From stathisp at gmail.com Fri Jan 1 00:50:37 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 11:50:37 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <776662.76806.qm@web36506.mail.mud.yahoo.com> References: <776662.76806.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/1 Gordon Swobe : >> The brain or computer is the physical object instantiating the mind, >> like a sphere made of stone is a physical instantiation of an abstract >> sphere. You can destroy a physical sphere but you can't destroy the >> abstract sphere > > It seems then that you suppose yourself as possessing or equaling this "abstract sphere of mind" that your brain instantiated, and that you suppose further that this abstract sphere of yours will continue to exist after your body dies. Correct me if I'm wrong. After I die my mind can be instantiated again multiple times, with different matter. If the brain were identical with the mind this would not be possible: a copy of a brain, however faithful, is a different physical object. The function of the brain can be reproduced, but not the brain itself, and this is consistent with the mind being reproducible and being a function of the brain. These metaphysical musings are interesting but have no bearing on the rigorous argument presented before, which showed that whatever the mind is, if the function of the device generating it is reproduced then the mind is also reproduced. You did not come up with a rebuttal, other than to put forward your feeling that some magic would happen to stop it being true. -- Stathis Papaioannou
From stathisp at gmail.com Fri Jan 1 01:32:36 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 12:32:36 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> Message-ID: 2009/12/31 scerir : > [Stathis] > We can compute probabilistic answers, often with high certainty, where true > randomness is involved (e.g. I predict that I won't quantum tunnel to the other > side of the Earth), or we can use pseudorandom number generators. I don't think > anyone has shown a situation where true randomness can be distinguished from > pseudorandomness, but even if that should be a stumbling block in simulating a > brain, it would be possible to bypass it by including a true random source, > such as radioactive decay, in the machine. > > # > > To my knowledge there are: > -pseudo-randomness, which is computable and deterministic; specific software > programs are the sources. > -quantum randomness, which is uncomputable (not by definition, but because of > theorems; no Turing machine can enumerate an infinity of correct bits of the > sequence produced by a quantum device); there are several sources (radioactive > decays; arrival times; beam splitters; metastable states decay; etc.) > -algorithmic randomness, which is uncomputable (I would say by definition). There is a way to produce algorithmic randomness with a Turing machine, requiring something of a trick. The Turing machine runs a program which generates a virtual world containing an observer, and the observer has a piece of paper with a bitstring written on it. At regular intervals the program duplicates the entire virtual world including the observer, but in one copy appends 1 to the bitstring and in the other appends 0. From the point of view of the observer, whether the next bit will be a 1 or a 0 is indeterminate, and the bitstring he has so far is truly random. This is despite the fact that the program is completely deterministic from the perspective of an outside observer. It will be obvious that this is a model of quantum randomness under the MWI of QM: God does not play dice, but his creatures do. -- Stathis Papaioannou
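A minimal sketch of the duplication trick just described, in illustrative Python (reducing each virtual world to its observer's bitstring is a simplification assumed for the sketch, not part of the argument):

    # Deterministically duplicate every world at each step, appending 0
    # to the observer's bitstring in one copy and 1 in the other.
    def run(steps):
        worlds = [""]  # one initial world, an empty bitstring
        for _ in range(steps):
            worlds = [bits + "0" for bits in worlds] + \
                     [bits + "1" for bits in worlds]
        return worlds

    # Seen from outside, nothing is random: run(4) always yields the
    # same 16 histories.
    print(sorted(run(4)))

Seen from inside, no single observer can predict his own next bit, since both continuations of his current history are always generated; the indeterminacy exists only relative to a particular branch.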
From gts_2000 at yahoo.com Fri Jan 1 02:52:32 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 31 Dec 2009 18:52:32 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <874371.98899.qm@web36502.mail.mud.yahoo.com> --- On Thu, 12/31/09, Stathis Papaioannou wrote: >> It seems then that you suppose yourself as possessing >> or equaling this "abstract sphere of mind" that your brain >> instantiated, and that you suppose further that this >> abstract sphere of yours will continue to exist after your >> body dies. Correct me if I'm wrong. > > After I die my mind can be instantiated again multiple > times, with different matter. I see. So then not only do you believe you have something like a soul (though you use this euphemism "sphere of mind") you believe also in the possible multiple reincarnations of your soul. Interesting. > If the brain were identical with the mind this would > not be possible: Naturally you must also believe in the duality of mind and matter, an idea left over like a bad hangover from Descartes and other dualists. Your beliefs above would otherwise make no sense to you. > These metaphysical musings are interesting but have no > bearing on the rigorous argument presented before, On the contrary, they must certainly do. I will tell you this in no uncertain terms: you will never understand Searle until you learn to see past the sort of religious ideas you have presented above. And until you understand him, you won't understand what you need to do to refute his argument. You might start here: http://socrates.berkeley.edu/~jsearle/Consciousness1.rtf -gts
From gts_2000 at yahoo.com Fri Jan 1 03:18:06 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 31 Dec 2009 19:18:06 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <722968.21666.qm@web36506.mail.mud.yahoo.com> --- On Thu, 12/31/09, Stathis Papaioannou wrote: > rigorous argument presented before, which showed that > whatever the mind is, if the function of the device generating it > is reproduced then the mind is also reproduced. You did not come up > with a rebuttal, other than to put forward your feeling that some > magic would happen to stop it being true. I stated twice during your presentation of your argument that you were speaking of a logical contradiction. I played along anyway, and from this you apparently got the idea that you had somehow presented an argument that I would find convincing. I'll put an experiment to you, and you tell me what the answer should be: "Please imagine that your brain exists as partly real and partly as an abstract formal description of its former reality, and then report your imagined subjective experience." I hope you can appreciate how any reasonable person would consider that question incoherent and even ludicrous. I hope you can also see that from my point of view, you asked me that same question.
-gts
From scerir at libero.it Fri Jan 1 03:57:05 2010 From: scerir at libero.it (scerir) Date: Fri, 1 Jan 2010 04:57:05 +0100 (CET) Subject: [ExI] Some new angle about AI Message-ID: <10954041.186901262318225013.JavaMail.defaultUser@defaultHost> > There is a way to produce algorithmic randomness with a Turing > machine, requiring something of a trick. The Turing machine runs a > program which generates a virtual world containing an observer, and > the observer has a piece of paper with a bitstring written on it. At > regular intervals the program duplicates the entire virtual world > including the observer, but in one copy appends 1 to the bitstring and > in the other appends 0. From the point of view of the observer, > whether the next bit will be a 1 or a 0 is indeterminate, and the > bitstring he has so far is truly random. This is despite the fact that > the program is completely deterministic from the perspective of an > outside observer. It will be obvious that this is a model of quantum > randomness under the MWI of QM: God does not play dice, but his > creatures do. > Stathis Papaioannou A Turing machine can compute many things. It cannot compute other things, like (in general) real numbers (because of their incompressibility). I can agree that, like in your example, a perspective from within is different from the perspective of an outsider. A God who does not play dice is well possible (even the late Dirac had that opinion) but the God who plays the ManyWorlds or the Great Programmer who computes all evolutions of all universes, and not the specific evolution of the specific universe, are lazy, IMO.
From stathisp at gmail.com Fri Jan 1 04:09:07 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 15:09:07 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <874371.98899.qm@web36502.mail.mud.yahoo.com> References: <874371.98899.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/1 Gordon Swobe : >> After I die my mind can be instantiated again multiple >> times, with different matter. > > I see. So then not only do you believe you have something like a soul (though you use this euphemism "sphere of mind") you believe also in the possible multiple reincarnations of your soul. Interesting. Even if it turns out that the brain is uncomputable, the mind can be duplicated by assembling atoms in the same configuration as the original brain. If you accept this and you describe it as transfer of a soul from one body to another, then you believe in a soul. Most scientists and rational philosophers believe this but don't call it a soul, preferring to reserve that term for a supernatural entity created by God. Indeed, those who think your mind *won't* be duplicated if your brain is duplicated at least tacitly believe in a supernatural soul. >> If the brain were identical with the mind this would >> not be possible: > > Naturally you must also believe in the duality of mind and matter, an idea left over like a bad hangover from Descartes and other dualists. Your beliefs above would otherwise make no sense to you. Are you a dualist regarding computer programs? On the one hand there is the physical hardware implementing the program, and on the other hand there is the abstract program itself. If that is dualism, then the term could be equally well applied to the mind/body distinction. >> These metaphysical musings are interesting but have no >> bearing on the rigorous argument presented before, > > On the contrary, they must certainly do.
> I will tell you this in no uncertain terms: you will never understand Searle until you learn to see past the sort of religious ideas you have presented above. And until you understand him, you won't understand what you need to do to refute his argument. > > You might start here: > > http://socrates.berkeley.edu/~jsearle/Consciousness1.rtf There isn't actually anything in that paper with which I or most of the others on this list who have been arguing with you will disagree. The only serious error Searle makes is to claim that computer programs can't generate consciousness while at the same time holding that the brain can be described algorithmically. These two ideas lead to an internal inconsistency, which is the worst sort of philosophical error. -- Stathis Papaioannou
From stathisp at gmail.com Fri Jan 1 05:29:32 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 16:29:32 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <722968.21666.qm@web36506.mail.mud.yahoo.com> References: <722968.21666.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/1 Gordon Swobe : > --- On Thu, 12/31/09, Stathis Papaioannou wrote: > >> rigorous argument presented before, which showed that >> whatever the mind is, if the function of the device generating it >> is reproduced then the mind is also reproduced. You did not come up >> with a rebuttal, other than to put forward your feeling that some >> magic would happen to stop it being true. > > I stated twice during your presentation of your argument that you were speaking of a logical contradiction. I played along anyway, and from this you apparently got the idea that you had somehow presented an argument that I would find convincing. > > I'll put an experiment to you, and you tell me what the answer should be: > > "Please imagine that your brain exists as partly real and partly as an abstract formal description of its former reality, and then report your imagined subjective experience." > > I hope you can appreciate how any reasonable person would consider that question incoherent and even ludicrous. I hope you can also see that from my point of view, you asked me that same question. What does "partly as an abstract formal description of its former reality" mean? It certainly could be taken as incoherent nonsense. I asked you no such thing. I asked what would happen if a surgeon installed in your brain artificial neurons which were designed so that they perform the same function as biological neurons. You agreed that it is possible to make such neurons, and you agreed that they could be installed. These are easily understandable, concrete concepts. Such procedures might even become commonplace in a few years' time, as treatment for patients who have had strokes or head injuries. Naturally, the patients would be observed after the procedure and they would either behave normally and say that they felt normal, or they would not. It's perfectly straightforward, and the whole experiment from start to finish could be done by technicians with no idea about philosophy of mind. Your insistence that it's nonsense suggests that you have such a strong attachment to your position that you don't want to face any argument that you can see would challenge it. -- Stathis Papaioannou
From jonkc at bellsouth.net Fri Jan 1 06:53:15 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 1 Jan 2010 01:53:15 -0500 Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <853659.71219.qm@web36508.mail.mud.yahoo.com> References: <853659.71219.qm@web36508.mail.mud.yahoo.com> Message-ID: On Dec 31, 2009, Gordon Swobe wrote: > Simulated objects affect each other in the sense that mathematical abstractions affect each other OK, that's all I need! And mind is an abstraction, that's why computers will be able to produce it exactly, not simulate it, produce it. > and we can make pragmatic use of those abstractions in computer modeling. We can indeed. Pragmatic means something works, but you seem to think that the fact that something works is no reason to think it might be true; you have already demonstrated that you think the fact that ideas don't work, such as your ideas that conflict with evolution, is no reason to think they are untrue. I disagree, I do not think that strategy leads to enlightenment. > But those objects cannot as you claimed in a previous message "burn" each other, nor can they as Stathis claimed have the property of "wetness". Simulated fire doesn't burn things and simulated waterfalls are not "wet". As you have done many many times before you make declarations but don't say why we should believe such statements and you don't even try to refute the arguments against them, you just ignore them. Well I admit that is easier. > It looks like religion to me when people here confuse computer simulations of things with real things, especially when those simulations happen to represent intentional entities, e.g., real people. I maintain that 3 facts are undeniable: 1) It is virtually certain that random mutation and natural selection produced life on Earth. 2) It is virtually certain that evolution can see intelligent behavior but is blind to consciousness. 3) It is absolutely certain that there is at least one conscious being on this planet. From that I conclude that intelligent behavior must produce consciousness. You say you don't understand how that could be and I don't exactly understand it either but reality is not required to be a slave to our understanding. The way science advances is that evidence mounts that something is puzzling and people start to think of ways of solving the puzzle. Your way is simply to pretend that the evidence doesn't exist, and I don't think objecting to such a philosophy is religious. John K Clark
From gts_2000 at yahoo.com Fri Jan 1 14:46:46 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 1 Jan 2010 06:46:46 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <803323.66687.qm@web36501.mail.mud.yahoo.com> --- On Thu, 12/31/09, Stathis Papaioannou wrote: > Even if it turns out that the brain is uncomputable, the > mind can be duplicated by assembling atoms in the same configuration > as the original brain. I happen to agree that we can duplicate a brain atom for atom and have the same person at the end (if I didn't then I would not identify with extropianism) but you had asserted something in a previous post suggesting that your "abstract sphere of mind" exists independently of the physical matter that comprises your brain. In my opinion you fall off the rails there and wander into the land of metaphysical dualism. > Are you a dualist regarding computer programs? No, but you on the other hand should describe yourself as such given that you believe we can get intentional entities from running programs.
The conventional strong AI research program is based on that same false premise, where software = mind and hardware = brain, and it won't work for exactly that reason. > The only serious error Searle makes is to claim that > computer programs can't generate consciousness while at the same > time holding that the brain can be described algorithmically. No error at all, except that you cannot or will not see past your dualist assumptions, or at least not far enough to see what Searle actually means. I had hoped that paper I referenced would bring you some clarity but I see it didn't. What you cannot or refuse to see is that a formal program simulating the brain cannot cause consciousness in a s/h system that implements it any more than can a simulated thunderstorm cause wetness in that same s/h system. It makes no difference how perfectly that simulation describes the thing it simulates. If you expect to find consciousness in or stemming from a computer simulation of a brain then I would suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? And why not? Because, as I keep trying to communicate to you, simulations of things do not equal the things they simulate. Descriptions of things do not equal the things they describe. -gts From stathisp at gmail.com Fri Jan 1 16:01:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 03:01:31 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <803323.66687.qm@web36501.mail.mud.yahoo.com> References: <803323.66687.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/2 Gordon Swobe : > I happen to agree that we can duplicate a brain atom for atom and have the same person at the end (if I didn't then I would not identify with extropianism) but you had asserted something in a previous post suggesting that your "abstract sphere of mind" exists independently of the physical matter that comprises your brain. In my opinion you fall off the rails there and wander into the land of metaphysical dualism. You destroy a person and make a copy, and you have the "same" person again even if the original has been dead a million years. The physical object doesn't survive, but the mind does; so the mind is not the same as the physical object. Whether you call this dualism or not is a matter of taste. >> Are you a dualist regarding computer programs? > > No, but you on the other hand should describe yourself as such given that you believe we can get intentional entities from running programs. The conventional strong AI research program is based on that same false premise, where software = mind and hardware = brain, and it won't work for exactly that reason. I was referring to ordinary programs that aren't considered conscious. The program is not identical with the computer, since the same program can be instantiated on different hardware. If you want to call that dualism, you can. >> The only serious error Searle makes is to claim that >> computer programs can't generate consciousness while at the same >> time holding that the brain can be described algorithmically. > > No error at all, except that you cannot or will not see past your dualist assumptions, or at least not far enough to see what Searle actually means. 
> I had hoped that paper I referenced would bring you some clarity but I see it didn't. As I said, I agree with that paper. I just think he's wrong about computers and their potential for consciousness, which in that paper he only alludes to in passing. > What you cannot or refuse to see is that a formal program simulating the brain cannot cause consciousness in a s/h system that implements it any more than can a simulated thunderstorm cause wetness in that same s/h system. It makes no difference how perfectly that simulation describes the thing it simulates. > > If you expect to find consciousness in or stemming from a computer simulation of a brain then I would suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? And why not? Because, as I keep trying to communicate to you, simulations of things do not equal the things they simulate. Descriptions of things do not equal the things they describe. You keep repeating this, but I have shown that a device which reproduces the behaviour of a biological brain will also reproduce the consciousness. The argument is robust in that it relies on no other philosophical or scientific assumptions. How the brain behaviour is reproduced is not actually part of the argument. If it turns out that the brain's behaviour can be described algorithmically, as Searle and most cognitive scientists believe, then that establishes computationalism; if not, it still establishes functionalism by another means. -- Stathis Papaioannou
From nanite1018 at gmail.com Fri Jan 1 16:08:30 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Fri, 1 Jan 2010 11:08:30 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <803323.66687.qm@web36501.mail.mud.yahoo.com> References: <803323.66687.qm@web36501.mail.mud.yahoo.com> Message-ID: <9CC8BBA6-D1E8-4834-9312-D6A6C25ECA7F@GMAIL.COM> > If you expect to find consciousness in or stemming from a computer > simulation of a brain then I would suppose you might also expect to > eat a photo of a ham sandwich off a lunch menu and find that it > tastes like the ham sandwich it simulates. After all, on your logic > the simulation of the ham sandwich is implemented in the substrate > of the menu. But that piece of paper won't taste much like a ham > sandwich, now will it? And why not? Because, as I keep trying to > communicate to you, simulations of things do not equal the things > they simulate. Descriptions of things do not equal the things they > describe. > > -gts I'll just jump in to say that this is a bad analogy, at best. Consciousness is not a thing in the world that makes things happen directly. Consciousness only affects the world by "directing" the body to do things. If your simulation of a ham sandwich can also interact with my taste buds exactly like a ham sandwich (akin to hooking up a simulation of the brain to a body through electro-neuro connections, etc.) then fine. But a really good photo isn't a perfect simulation, and it certainly cannot interact with the world in the way a sandwich actually does. Same thing with your other analogy about thunderstorms. A simulation of a thunderstorm can't make things wet in the real world because it is in a computer. But it can make the entities in the simulation "wet". And if you had a really complex machine that could make wind and distribute water molecules and have a big screen to show a photo of what the thunderstorm would look like from the ground, well then it could make things wet. Divorcing the simulation from the world will prevent it from doing the things that the real thing would do. But if you connect it to the "real" world in a way that lets it do everything it would normally do (all the outputs from your simulation of the brain, for example, direct a body, and all the inputs from the body's senses go to the simulation), then it will do exactly what it normally does. So all you have to do is connect your simulation of a brain to a body, and it will be just like the actual brain. Joshua Job nanite1018 at gmail.com
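Joshua's point can be made concrete with a toy sketch in Python (all class, method, and message names here are invented for illustration; RobotBody stands in for whatever real sensors and motors one cares to imagine, and is not a real API):

    class SimulatedBody:
        # A body living entirely inside the simulation.
        def sense(self):
            return 20.0  # a simulated temperature reading
        def act(self, command):
            print("simulated actuator:", command)

    class RobotBody:
        # A hypothetical wrapper around real hardware.
        def sense(self):
            raise NotImplementedError("would read a physical sensor here")
        def act(self, command):
            raise NotImplementedError("would drive a physical motor here")

    def brain(body):
        # The control logic cannot tell which kind of body it is wired to;
        # its behaviour is fixed entirely by the input/output coupling.
        reading = body.sense()
        body.act("heat on" if reading < 22.0 else "heat off")

    brain(SimulatedBody())  # same code, simulated effects
    # brain(RobotBody())    # same code, real-world effects

The controller's behaviour is identical in both cases; only the coupling decides whether its outputs move simulated entities or real ones.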
From gts_2000 at yahoo.com Fri Jan 1 16:20:13 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 1 Jan 2010 08:20:13 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <935067.36966.qm@web36503.mail.mud.yahoo.com> --- On Fri, 1/1/10, Stathis Papaioannou wrote: >> I'll put an experiment to you, and you tell me what >> the answer should be: >> >> "Please imagine that your brain exists as partly real >> and partly as an abstract formal description of its former >> reality, and then report your imagined subjective >> experience." > >> I hope you can appreciate how any reasonable person would >> consider that question incoherent and even ludicrous. I hope >> you can also see that from my point of view, you asked me >> that same question. > What does "partly as an abstract formal description of its > former reality" mean? It means that programs exist as formal descriptions of real or supposed objects or processes. They describe and simulate real objects and real processes but they do not equal them. >> I asked you no such thing. You did but apparently you didn't understand me well enough to realize it. > I asked what would happen if a > surgeon installed in your brain artificial neurons which were > designed so that they perform the same function as biological neurons. I have no problem with artificial neurons, per se. I have a problem with the notion that programs that simulate real objects and processes, such as those that exist in your plan for artificial neurons, can have the same sort of reality as the neurological objects and processes they simulate. They can't. You might just as well have asked me to imagine myself as imaginary, whatever that means. -gts
From sparge at gmail.com Fri Jan 1 16:51:20 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 1 Jan 2010 11:51:20 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: References: <853659.71219.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/1 John Clark : > > I maintain that 3 facts are undeniable: > > 1) It is virtually certain that random mutation and natural selection > produced life on Earth. > 2) It is virtually certain that evolution can see intelligent behavior but > is blind to consciousness. If you mean that intelligence improves a species' ability to survive then I agree. > 3) It is absolutely certain that there is at least one conscious being on > this planet. Granted, allowing for the possibility that "this planet" doesn't really exist as we think it does. > From that I conclude that intelligent behavior must produce consciousness. OK, here you lost me. I don't see how you can say anything stronger than "intelligent behavior *can* produce consciousness". -Dave
From gts_2000 at yahoo.com Fri Jan 1 17:13:23 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 1 Jan 2010 09:13:23 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <528478.52920.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/1/10, Stathis Papaioannou wrote: > You destroy a person and make a copy, and you have the > "same" person again even if the original has been dead a million years. > The physical object doesn't survive, but the mind does Okay, but you'll agree I assume that the person's intentionality goes away completely for a million years? He went away to become food for worms (or to cryo, whatever). We can rightly consider anyone during that million year period who claims his mind still exists as a loon who believes in ghosts. Yes?
> > No, but you on the other hand should describe yourself > as such given that you believe we can get intentional > entities from running programs. The conventional strong AI > research program is based on that same false premise, where > software = mind and hardware = brain, and it won't work for > exactly that reason. > > I was referring to ordinary programs that aren't considered > conscious. The program is not identical with the computer, since the > same program can be instantiated on different hardware. If you want to > call that dualism, you can. But I think you would expect the same for a program that had somehow caused strong AI. That is the dualistic approach to strong AI that Searle takes issue with. For strong AI to work (as it does in humans that have the same capability) we need to re-create the substance of it (not merely the form of it as in a program) much like nature did and exactly as you did in your experiment above about recreating a copy of the brain. > As I said, I agree with that paper. I just think he's wrong > about computers and their potential for consciousness, which in > that paper he only alludes to in passing. I pointed you to that paper to show you his conception of consciousness/intentionality, and because if I remember correctly he also discusses the problem with duality. > > If you expect to find consciousness in or stemming > from a computer simulation of a brain then I would suppose > you might also expect to eat a photo of a ham sandwich off a > lunch menu and find that it tastes like the ham sandwich it > simulates. After all, on your logic the simulation of the > ham sandwich is implemented in the substrate of the menu. > But that piece of paper won't taste much like a ham > sandwich, now will it? And why not? Because, as I keep > trying to communicate to you, simulations of things do not > equal the things they simulate. Descriptions of things do > not equal the things they describe. > > You keep repeating this, but I have shown that a device > which reproduces the behaviour of a biological brain will also > reproduce the consciousness. You didn't show it to me. If you showed me anything, you showed me that an artificial brain that behaves like a real brain but does not have the material substance of a real brain will result in a mindless cartoon character that merely acts like he has intentionality, i.e., weak AI. You'll find it easier to see if you replace his entire brain with a formal programmatic description of it. Programs merely describe the real or supposed things that they're about. They're the depiction of food on a lunch menu, not the food itself. -gts
From stefano.vaj at gmail.com Fri Jan 1 22:01:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 1 Jan 2010 23:01:50 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <945049.54403.qm@web65608.mail.ac4.yahoo.com> Message-ID: <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> 2009/12/30 Stathis Papaioannou : > If the brain is computable it does not necessarily mean there will be > computational shortcuts in predicting human behaviour. You may just > have to simulate the human and let the program run to see what > happens. How can the brain not be computable as far as its *computations* are concerned? Because the real point of AGI is certainly not that of replicating, say, its metabolism...
-- Stefano Vaj
From stefano.vaj at gmail.com Fri Jan 1 22:20:44 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 1 Jan 2010 23:20:44 +0100 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> Message-ID: <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> 2009/12/31 John Clark : > On Dec 30, 2009, at 3:04 PM, scerir wrote: > no Turing machine can enumerate an infinity of correct bits of the > sequence produced by a quantum device > > It's worse than that, there are numbers (almost all numbers in fact) that a > Turing machine can't even come arbitrarily close to evaluating. A Quantum > Computer probably couldn't do that either but it hasn't been proven. But we can say that organic brains do much worse than both kinds of computers at mathematical problems... -- Stefano Vaj
From stathisp at gmail.com Sat Jan 2 01:21:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 12:21:50 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <935067.36966.qm@web36503.mail.mud.yahoo.com> References: <935067.36966.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/2 Gordon Swobe : > --- On Fri, 1/1/10, Stathis Papaioannou wrote: > >>> I'll put an experiment to you, and you tell me what >>> the answer should be: >>> >>> "Please imagine that your brain exists as partly real >>> and partly as an abstract formal description of its former >>> reality, and then report your imagined subjective >>> experience." >> >>> I hope you can appreciate how any reasonable person would >>> consider that question incoherent and even ludicrous. I hope >>> you can also see that from my point of view, you asked me >>> that same question. > > >> What does "partly as an abstract formal description of its >> former reality" mean? > > It means that programs exist as formal descriptions of real or supposed objects or processes. They describe and simulate real objects and real processes but they do not equal them. > >> I asked you no such thing. > > You did but apparently you didn't understand me well enough to realize it. Right, I asked you the question from the point of view of a concrete-thinking technician. This simpleton sets about building artificial neurons from parts he buys at Radio Shack without it even occurring to him that the programs these parts run are formal descriptions of real or supposed objects which simulate but do not equal the objects. When he is happy that his artificial neurons behave just like the real thing he has his friend the surgeon, also technically competent but not philosophically inclined, install them in the brain of a patient rendered aphasic after a stroke. We can add a second part to the experiment in which the technician builds another set of artificial neurons based on clockwork nanomachinery rather than digital circuits and has them installed in a second patient, the idea being that the clockwork neurons do not run formal programs. You then get to talk to the patients. Will both patients be able to speak equally well? If so, would it be right to say that one understands what he is saying and the other doesn't? Will the patient with the clockwork neurons report he feels normal while the other one reports he feels weird? Surely you should be able to observe *something*. If you coped with the Chinese Room thought experiment but you claim the one I have just described is incoherent or ridiculous then you are being intellectually dishonest.
-- Stathis Papaioannou
From stathisp at gmail.com Sat Jan 2 02:00:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 13:00:42 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> References: <945049.54403.qm@web65608.mail.ac4.yahoo.com> <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> Message-ID: 2010/1/2 Stefano Vaj : > 2009/12/30 Stathis Papaioannou : >> If the brain is computable it does not necessarily mean there will be >> computational shortcuts in predicting human behaviour. You may just >> have to simulate the human and let the program run to see what >> happens. > > How can the brain not be computable as far as its *computations* are > concerned? Because the real point of AGI is certainly not that of > replicating, say, its metabolism... It's more an issue for mind uploading than for AGI. The only certain way to simulate a brain is to simulate the activity of neurons at the molecular level. Even if we look at a simple binary behaviour such as whether a neuron fires or not it will be dependent on everything that goes on inside the cell. There will probably be allowable computational shortcuts but we can't know without careful research what these shortcuts will be. -- Stathis Papaioannou
From stathisp at gmail.com Sat Jan 2 02:06:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 13:06:51 +1100 Subject: [ExI] Some new angle about AI. In-Reply-To: <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> Message-ID: 2010/1/2 Stefano Vaj : > But we can say that organic brains do much worse than both kinds of > computers at mathematical problems... But organic brains do better than computers at the highest level of mathematical creativity. Interestingly, it is this rather than the ability to have feelings, produce art etc. that Roger Penrose used in his case against AI. -- Stathis Papaioannou
From stathisp at gmail.com Sat Jan 2 04:12:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 15:12:44 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <528478.52920.qm@web36505.mail.mud.yahoo.com> References: <528478.52920.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/2 Gordon Swobe : > You didn't show it to me. If you showed me anything, you showed me that an artificial brain that behaves like a real brain but does not have the material substance of a real brain will result in a mindless cartoon character that merely acts like he has intentionality, i.e., weak AI. > > You'll find it easier to see if you replace his entire brain with a formal programmatic description of it. Programs merely describe the real or supposed things that they're about. They're the depiction of food on a lunch menu, not the food itself. The reason I insist on the partial replacement experiment is that it shows the absurdity of your position by forcing you to consider what effect functionally identical but mindless components would have on the rest of the brain. But it seems you are so sure your position is correct that you consider any argument purporting to show otherwise as wrong by definition, even if you can't point out where the problem is. -- Stathis Papaioannou
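The kind of "computational shortcut" at issue can be illustrated with a toy leaky integrate-and-fire neuron, a standard whole-neuron abstraction, sketched here in illustrative Python (the parameters are arbitrary, and whether any such abstraction preserves everything that matters is exactly the empirical question raised above):

    def simulate_neuron(inputs, threshold=1.0, leak=0.9):
        # Membrane potential decays by `leak` each step, integrates the
        # incoming current, and fires (then resets) on crossing threshold.
        potential = 0.0
        spikes = []
        for current in inputs:
            potential = potential * leak + current
            if potential >= threshold:
                spikes.append(1)
                potential = 0.0
            else:
                spikes.append(0)
        return spikes

    print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 1.0]))  # -> [0, 0, 1, 0, 1]

A molecular-level simulation would have to reproduce the same input-output behaviour from everything going on inside the cell; the open question is which levels of description are dispensable.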
From jonkc at bellsouth.net Sat Jan 2 05:44:07 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 2 Jan 2010 00:44:07 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <803323.66687.qm@web36501.mail.mud.yahoo.com> References: <803323.66687.qm@web36501.mail.mud.yahoo.com> Message-ID: <03342575-E77C-4674-92A0-3972A4BD7FC0@bellsouth.net> On Jan 1, 2010, Gordon Swobe wrote: > In my opinion you fall off the rails there and wander into the land of metaphysical dualism. It may be dualism to say that what a thing is and what a thing does are not the same, but it's not metaphysical it's just logical. For example, saying mind is what a brain does is no more metaphysical than saying going fast is what a racing car does. > you on the other hand should describe yourself as such given that you believe we can get intentional entities from running programs. Intentional means calculable, and calculable sounds to me to be something programs should be rather good at. > The conventional strong AI research program is based on that same false premise There is no such thing as strong AI research, there is just AI research. Nobody is doing Artificial Consciousness research because claiming success would be just too easy. > > If you expect to find consciousness in or stemming from a computer simulation of a brain then I would suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. I haven't actually tried to do it but I don't believe that would work very well. It's just a hunch. > After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? No. > And why not? Because a ham sandwich is a noun and a photo of one is a very different noun and consciousness is not even a noun at all. > What you cannot or refuse to see is that a formal program simulating the brain cannot cause consciousness So you and Searle keep telling us over and over and over again, but Gordon, my problem is that I think Charles Darwin was smarter than either one of you. And the fossil record also thinks Darwin was smarter. John K Clark
From scerir at libero.it Sat Jan 2 07:15:17 2010 From: scerir at libero.it (scerir) Date: Sat, 2 Jan 2010 08:15:17 +0100 (CET) Subject: [ExI] Some new angle about AI Message-ID: <8449203.201761262416517490.JavaMail.defaultUser@defaultHost> [Stefano] But we can say that organic brains do much worse than both kinds of computers at mathematical problems... [Stathis] But organic brains do better than computers at the highest level of mathematical creativity. Interestingly, it is this rather than the ability to have feelings, produce art etc. that Roger Penrose used in his case against AI. # There is an interesting quote here, about the importance of intuition (or, to say it better, mathematical intuition) as opposed to undecidability/uncomputability. "I don't see any reason why we should have less confidence in this kind of perception, i.e., in mathematical intuition, than in sense perception, which induces us to build up physical theories and to expect that future sense perceptions will agree with them and, moreover, to believe that a question not decidable now has meaning and may be decided in the future." - K.Godel, 'What is Cantor's Continuum Problem?', Philosophy of Mathematics, ed.
P.Benacerraf & H. Putnam, p. 483, (year and publisher unknown).
From stefano.vaj at gmail.com Sat Jan 2 10:03:55 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 2 Jan 2010 11:03:55 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <945049.54403.qm@web65608.mail.ac4.yahoo.com> <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> Message-ID: <580930c21001020203q4800fac2s93777961471637f1@mail.gmail.com> 2010/1/2 Stathis Papaioannou : > It's more an issue for mind uploading than for AGI. The only certain > way to simulate a brain is to simulate the activity of neurons at the > molecular level. Even if we look at a simple binary behaviour such as > whether a neuron fires or not it will be dependent on everything that > goes on inside the cell. There will probably be allowable > computational shortcuts but we can't know without careful research > what these shortcuts will be. Even without shortcuts, or approximations "good enough" not to imply any perceivable behavioural modifications, an organic brain is a relatively small system with a very finite number of states. Even very big brains have some 10^20 or something molecules, and that of, say, fruitflies orders of magnitude fewer... -- Stefano Vaj
From kanzure at gmail.com Sat Jan 2 14:37:23 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 2 Jan 2010 08:37:23 -0600 Subject: [ExI] Fwd: [wta-talk] Enhancement's Good, But Is It Necessary? In-Reply-To: <3d1c82451001012016w2f8d1ed1jadc8ac1d53107de8@mail.gmail.com> References: <4B3B5A4E.6020107@gmail.com> <580930c21001011418y63188b1cjaac5d747fc966b30@mail.gmail.com> <3d1c82451001012016w2f8d1ed1jadc8ac1d53107de8@mail.gmail.com> Message-ID: <55ad6af71001020637x5bccd77ey37927d74a3b91133@mail.gmail.com> ---------- Forwarded message ---------- From: Christopher Healey Date: Fri, Jan 1, 2010 at 10:16 PM Subject: Re: [wta-talk] Enhancement's Good, But Is It Necessary? To: Humanity+ Discussion List > how can you justify prioritizing transhumanism? It's much easier to scale a sheer face with the proper gear. We stand at the base camp of many problems that tower over us menacingly, and we might even be equipped to tackle a few of them. But which few? Given limited resources, which problems are the most moral to ignore? We all employ tools to effect changes within our sphere of influence; better gear can deliver more leverage to assail more challenges. Perhaps, if one does it right, to assail entire classes of challenge in one fell swoop. Transhumanism simply recognizes that as we zoom inward from our sphere of influence's farthest reaches, it doesn't stop at our skin, but continues inward to our deepest structure. To be responsible to our intentions of a better world, we are compelled to look not only at external, but also internal changes; if we can safely deliver these internal *choices*, how could we morally squander such leverage? It's all about getting there (a better world), from here. Safely and responsibly, of course. Transhumanism is a rooted sub-goal of seeking a better future.
From gts_2000 at yahoo.com Sat Jan 2 14:50:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 2 Jan 2010 06:50:35 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <688803.71262.qm@web36507.mail.mud.yahoo.com> --- On Fri, 1/1/10, Stathis Papaioannou wrote: > Right, I asked you the question from the point of view of > a concrete-thinking technician.
This simpleton sets about > building artificial neurons from parts he buys at Radio Shack > without it even occurring to him that the programs these parts run > are formal descriptions of real or supposed objects which simulate but > do not equal the objects. When he is happy that his artificial > neurons behave just like the real thing he has his friend the surgeon, > also technically competent but not philosophically inclined, > install them in the brain of a patient rendered aphasic after a stroke. The surgeon replaces all those neurons relevant to correcting the patient's aphasia with a-neurons programmed and configured in such a way that the patient will pass the Turing test while appearing normal and healthy. We don't know in 2009 if this requires work in areas outside Wernicke's but we'll assume our surgeon here knows. The TT and the subject's reported symptoms represent the surgeon's only means of measuring the supposed health of his patient. > We can add a second part to the experiment in which the technician > builds another set of artificial neurons based on clockwork nanomachinery > rather than digital circuits and has them installed in a second > patient, the idea being that the clockwork neurons do not run formal > programs. A second surgeon does the same with this patient, releasing him from the hospital after he appears healthy and passes the TT. > You then get to talk to the patients. Will both patients be > able to speak equally well? Yes. > If so, would it be right to say that one understands what he is saying > and the other doesn't? Yes. On Searle's view the TT gives false positives for the first patient. > Will the patient with the clockwork neurons report he feels normal while > the other one reports he feels weird? Surely you should be able to > observe *something*. If either one appears or reports feeling abnormal, we send him back to the hospital. -gts
From gts_2000 at yahoo.com Sat Jan 2 15:29:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 2 Jan 2010 07:29:02 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <03342575-E77C-4674-92A0-3972A4BD7FC0@bellsouth.net> Message-ID: <393289.30312.qm@web36506.mail.mud.yahoo.com> --- On Sat, 1/2/10, John Clark wrote: >> In my opinion you fall off the rails there and wander into the land of >> metaphysical dualism. > > It may be dualism to say that what a thing is > and what a thing does are not the same, but it's not > metaphysical it's just logical. I see a metaphysical problem only when people assert that the mind exists as some sort of abstract entity (programmatic, algorithmic, whatever) distinct from the brain that actually does the work that we describe with those abstractions. If we want to say that mind exists in such abstract idealistic ways, that's fine, but now we must contend with all the problems associated with metaphysical dualism. Where does that mind exist? In the platonic realm? In the mind of god? Where? And how can idealistic entities affect the material world? And so on. I would rather not go down that road, nor would Searle, and I assume nobody here wants to go there either. > Intentional means calculable, and calculable sounds to me to be something > programs should be rather good at. Good at simulating intentionality, yes.
>> If you expect to find consciousness in or stemming >> from a computer simulation of a brain then I would suppose >> you might also expect to eat a photo of a ham sandwich off a >> lunch menu and find that it tastes like the ham sandwich it >> simulates. > I haven't actually tried to do it but I don't > believe that would work very well. It's just a hunch. Good hunch. > Because a ham sandwich is a noun and a photo of one is a very different > noun and consciousness is not even a noun at all. My point is that simulations only, ahem, simulate the things they simulate. The system in which we implement a simulation will not equal or contain the thing it simulates. It does not matter what we want to simulate, nor does it matter whether we use software to implement it in hardware or photos of ham sandwiches to implement it in lunch menus. No matter what we do, simulations of real things will never equal the real things they simulate. I don't see this as an especially difficult concept to fathom, and it has nothing to do with Darwin! -gts
From lcorbin at rawbw.com Sat Jan 2 16:03:27 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 02 Jan 2010 08:03:27 -0800 Subject: [ExI] Continued use of the term "intellectuals" is not productive Message-ID: <4B3F6E4F.1000307@rawbw.com> John Clark wrote: > On Dec 30, 2009, Tomasz Rola wrote in Re: [ExI] I accuse intellectuals... or else: > > > I wanted to make a list of intellectual Genghis Khans. > > > > Rafal suggested on Sun, 27 Dec 2009, > > > > > ### Maynard Keynes, John Kenneth Galbraith, Karl Marx, Friedrich > > > Engels, Noam Chomsky, almost any random sociologist since Emile > > > Durkheim, Upton Sinclair, Paul Krugman, Joseph Lincoln Steffens, > > > Albert Einstein, Jeremy Rifkin - collectively contributing to the > > > enactment of a staggering number of stupid policies, starting with > > > meat packing regulations and genetic engineering limits all the way to > > > affirmative action, social security, and the Fed. > > > [Damien added] > > > In US: the neocons are intellectuals, of a sort. They happily provided the > > > Iraq war. Dr. Leon Kass is clearly an intellectual, and he helped to ban > > > embryonic stem cell research... On a more highbrow level, if Heidegger > > > wasn't an intellectual, nobody is. > > I don't say all of these were as evil as Genghis Khan, that's asking for rather a lot, but off the top of my head here are some intellectuals that the world would probably have been better off if they'd never been born:
> Paul of Tarsus
> Augustine of Hippo
> Martin Luther
> Jean-Paul Marat
> Vladimir Lenin
> Philipp Lenard
For a long time I supposed that there was simply confusion between "intellectuals" and "evil intellectuals", compounded by the conviction of many that by definition an "intellectual" is a pointy-headed leftist type. To these people, and there are a lot of them, it would never occur that James Watson or Gauss was an intellectual.
Paul Johnson didn't help with his book "Intellectuals", though it does contain revealing and utterly devastating biographical sketches:
Jean-Jacques Rousseau: 'An Interesting Madman'
Shelley, or the Heartlessness of Ideas
Karl Marx: 'Howling Gigantic Curses'
Henrik Ibsen: 'On the Contrary'
Tolstoy: God's Elder Brother
The Deep Waters of Ernest Hemingway
Jean-Paul Sartre: 'A Little Ball of Fur and Ink'
Edmund Wilson: A Brand from the Burning
The Troubled Conscience of Victor Gollancz
Lies, Damned Lies and Lillian Hellman
But what I would have preferred is to retain "intellectual" for someone who, well, engages in intellectual activity, and I always tried to think of myself and my friends as such. But the cause is hopeless: sadly, the term now creates so much confusion that the only prudent recourse is to drop it, and to just say what you mean instead. This is one of those cases where it is utterly pointless to argue about the meaning of a term, as disappointed as will be those who want to bandy it about as opprobrium. Lee
From jonkc at bellsouth.net Sat Jan 2 16:21:20 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 2 Jan 2010 11:21:20 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <393289.30312.qm@web36506.mail.mud.yahoo.com> References: <393289.30312.qm@web36506.mail.mud.yahoo.com> Message-ID: On Jan 2, 2010, Gordon Swobe wrote: >> I see a metaphysical problem only when people assert that the mind exists as some sort of abstract entity (programmatic, algorithmic, whatever) distinct from the brain Fast is abstract, I can't hold fast in my hands, and fast is distinct from a racing car just as mind is not the same as a brain. What's all spooky and metaphysical about that? >> Intentional means calculable, and calculable sounds to me to be something >> programs should be rather good at. > > Good at simulating intentionality, yes. As long as the machine "simulates" intentionality with the same fidelity that it can "simulate" arithmetic or music I don't see there being the slightest problem. And if intentional means calculable or being directed to some object or goal then I can see absolutely no reason a machine couldn't do that, in fact they have been doing exactly that for years. I can only conclude that in Gordon-Speak the word "simulate" means done by a machine and it means precisely nothing more. > > My point is that simulations only, ahem, simulate the things they simulate. You have only one point, machines do simulations. I agree. > I don't see this as an especially difficult concept to fathom, and it has nothing to do with Darwin! OF COURSE IT HAS SOMETHING TO DO WITH DARWIN! But why bother, I've explained exactly why it's all about Darwin about 27 times but like so many other logical holes in your theory you don't even try to refute them, you just ignore them; and then repeat the exact same tired old discredited pronouncements with no more evidence to support them than the first time round. John K Clark
From gts_2000 at yahoo.com Sat Jan 2 16:46:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 2 Jan 2010 08:46:20 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <03342575-E77C-4674-92A0-3972A4BD7FC0@bellsouth.net> Message-ID: <39800.73598.qm@web36506.mail.mud.yahoo.com> --- On Sat, 1/2/10, John Clark wrote: > There is no such thing as strong AI > research, there is just AI research.
> Nobody is doing Artificial Consciousness research because claiming
> success would be just too easy.

Stathis and I engage in such research on this list, even as you watch and participate.

> Because a ham sandwich is a noun and a photo of one is a very different
> noun and consciousness is not even a noun at all.

My dictionary calls it a noun.

Stathis argues not without reason that if we can compute the brain then computer simulations of brains should have intentionality. I argue that even if we find a way to compute the brain, it does not follow that a simulation of it would have intentionality any more than it follows that a computer simulation of a ham sandwich would taste like a ham sandwich, or that a computer simulation of a waterfall would make a computer wet. Computer simulations of things do not equal the things they simulate.

I recall learning of a tribe of people in the Amazon forest or some such place that had never seen cameras. After seeing their photos for the first time, they came to fear them on the grounds that these amazing simulations of themselves captured their spirits. Not only did these naive people believe in spirits, they must also have believed that simulations of things somehow equal the things they simulate.

-gts

From gts_2000 at yahoo.com  Sat Jan  2 17:17:38 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sat, 2 Jan 2010 09:17:38 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: 
Message-ID: <955017.56632.qm@web36504.mail.mud.yahoo.com>

--- On Sat, 1/2/10, John Clark wrote:

>> I don't see this as an especially difficult concept to fathom, and it
>> has nothing to do with Darwin!
>
> OF COURSE IT HAS SOMETHING TO DO WITH DARWIN!

There you go again. If you think I have an issue with Darwin then either you don't understand me or you don't understand Darwin. I happen to count myself as a big fan of evolution, including evolutionary psychology. I subscribe to Richard Dawkins' gene-centric interpretation.

I have ignored your noises about this subject because usually I have very little time on my hands and more interesting things to write about.

-gts

From gts_2000 at yahoo.com  Sat Jan  2 18:26:06 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sat, 2 Jan 2010 10:26:06 -0800 (PST)
Subject: [ExI] Some new angle about AI
In-Reply-To: 
Message-ID: <68150.34111.qm@web36508.mail.mud.yahoo.com>

--- On Fri, 1/1/10, Stathis Papaioannou wrote:

> The only certain way to simulate a brain is to simulate the activity
> of neurons at the molecular level.

I agree with your general direction but I wonder how you know we needn't simulate them at the atomic or subatomic level. How do you know it's not turtles all the way down?

At the end of that philosophical tunnel, the simulation of the thing finally becomes the thing it simulates. Form and matter converge.

-gts

From aware at awareresearch.com  Sat Jan  2 18:27:32 2010
From: aware at awareresearch.com (Aware)
Date: Sat, 2 Jan 2010 10:27:32 -0800
Subject: [ExI] MBTI, and what a difference a letter makes...
Message-ID: 

Offered, for your consideration, a wondrous land where its denizens share an appreciation of novelty, and discovery of the subtle but meaningful patterns underlying it all. You're entering a dimension not only of sight and sound, but of mind. This land is a refuge for the rare race known as the xNTx, and into this land you, an ISTJ or Inspector Guardian, have just stumbled. That's the signpost just ahead: your next stop, the Extropy list.
Guardian Swobe, as an ISTJ, in your travels among rationalists, idealists, and artisans, you might do well to share this:

- Jef (INTJ)

From lcorbin at rawbw.com  Sat Jan  2 19:22:45 2010
From: lcorbin at rawbw.com (Lee Corbin)
Date: Sat, 02 Jan 2010 11:22:45 -0800
Subject: [ExI] Some new angle about AI
In-Reply-To: <68150.34111.qm@web36508.mail.mud.yahoo.com>
References: <68150.34111.qm@web36508.mail.mud.yahoo.com>
Message-ID: <4B3F9D05.40903@rawbw.com>

Gordon wrote:

> Stathis wrote:
>
>> The only certain way to simulate a brain is to simulate the activity
>> of neurons at the molecular level.

I assume this means at the input/output level only; that anything further would not add to the experience being had by the entity.

> I agree with your general direction but I wonder how you
> know we needn't simulate them at the atomic or subatomic
> level. How do you know it's not turtles all the way down?

Let's suppose for a moment that Gordon is right. In other words, internal mechanisms of the neuron must also be simulated.

I want to step back and reexamine the reason that all of this is important, and how our reasoning about it must be founded on one axiom that is quite different from the other scientific ones.

And that axiom is moral: if presented with two simulations only one of which is a true emulation, and they're both exhibiting behavior indicating extreme pain, we want to focus all relief efforts only on the one. We really do *not* care a bit about the other. (Again, good philosophy is almost always prescriptive, or entails prescriptive implications.)

For those of us who are functionalists (or, in my case, almost 100% functionalists), it seems almost inconceivable that the causal components of an entity's having an experience require anything beneath the neuron level. In fact, it's very likely that the simulation of whole neuron tracks or bundles suffices.

But I have no way of going forward to address Gordon's question. Logically, we have no way of knowing that in order to emulate experience, you have to simulate every single gluon, muon, quark, and electron. However, we can *never* in principle (so far as I can see) begin to answer that question, because ultimately, all we'll finally have to go on is behavior (with only a slight glance at the insides).

I merely claim that if Gordon or anyone else who doubts were to live 24/7 for years with an entity that acted wholly and completely human, yet who was a known simulation at, say, the neuron level, entirely composed of transistors whose activity could be single-stepped through, then Gordon or anyone else would soon apply the compassionate axiom, and find himself or herself incapable of betraying or inflicting pain on his or her new friend any more than upon a regular human.

Lee

From aware at awareresearch.com  Sat Jan  2 20:09:13 2010
From: aware at awareresearch.com (Aware)
Date: Sat, 2 Jan 2010 12:09:13 -0800
Subject: [ExI] Some new angle about AI
In-Reply-To: <4B3F9D05.40903@rawbw.com>
References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com>
Message-ID: 

On Sat, Jan 2, 2010 at 11:22 AM, Lee Corbin wrote:

> Let's suppose for a moment that Gordon is right. In other
> words, internal mechanisms of the neuron must also be
> simulated.

Argh, "turtles all the way down", indeed. Then must nature also compute the infinite expansion of the digits of pi for every soap bubble as well?
> I want to step back and reexamine the reason that all of this
> is important, and how our reasoning about it must be founded
> on one axiom that is quite different from the other scientific
> ones.
>
> And that axiom is moral: if presented with two simulations
> only one of which is a true emulation, and they're both
> exhibiting behavior indicating extreme pain, we want to
> focus all relief efforts only on the one. We really do
> *not* care a bit about the other.

This way too leads to contradiction, for example in the case of a person tortured, then with memory erased, within a black box. The morality of any act depends not on the **subjective** state of another, which by definition one could never know, but on our assessment of the rightness, in principle, of the action, in terms of our values.

> For those of us who are functionalists (or, in my case, almost
> 100% functionalists), it seems almost inconceivable that the causal
> components of an entity's having an experience require anything
> beneath the neuron level. In fact, it's very likely that the
> simulation of whole neuron tracks or bundles suffices.

Let go of the assumption of an **essential** consciousness, and you'll see that your functionalist perspective is entirely correct, but it needs only the level of detail, within context, to evoke the appropriate responses of the observer.

To paraphrase John Clark, "swiftness" is not in the essence of a car, and the closer one looks the less apt one is to find it. Furthermore (and I realize that John didn't say /this/), a car displays "swiftness" only within an appropriate context. But the key understanding is that this "swiftness" (separate from formal descriptions of rotational velocity, power, torque, etc.) is a function of the observer.

> But I have no way of going forward to address Gordon's
> question. Logically, we have no way of knowing

(and this is an example where logic fails but reason still prevails)

> that in
> order to emulate experience, you have to simulate every
> single gluon, muon, quark, and electron. However, we
> can *never* in principle (so far as I can see) begin to
> answer that question, because ultimately, all we'll
> finally have to go on is behavior (with only a slight
> glance at the insides).

> I merely claim that if Gordon or anyone else who doubts
> were to live 24/7 for years with an entity that acted
> wholly and completely human, yet who was a known simulation
> at, say, the neuron level, entirely composed of transistors
> whose activity could be single-stepped through, then Gordon
> or anyone else would soon apply the compassionate axiom,
> and find himself or herself incapable of betraying or
> inflicting pain on his or her new friend any more than
> upon a regular human.

And here, despite a ripple (more accurately a fold, or non-monotonicity) and a veering off to infinity on one side of your map of reality, you and I can agree on your conclusion.

Happy New Year, Lee.

- Jef

From jonkc at bellsouth.net  Sat Jan  2 21:11:02 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sat, 2 Jan 2010 16:11:02 -0500
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <39800.73598.qm@web36506.mail.mud.yahoo.com>
References: <39800.73598.qm@web36506.mail.mud.yahoo.com>
Message-ID: <767A65B2-6F57-48F1-88F1-EB4D003E3F66@bellsouth.net>

On Jan 2, 2010, at 11:46 AM, Gordon Swobe wrote:

> My dictionary calls it [consciousness] a noun.
Yes and dictionaries also call "I" a pronoun, and we know how much confusion that colossal error has given the world. Lexicographers make very poor philosophers.

> Stathis argues not without reason that if we can compute the brain then computer simulations of brains should have intentionality.

Punch card readers from the 1950's had intentionality, at least that's what your lexicographers think: the machine could do things that were calculable and could be directed to a goal. And I remind you that it was you not me that insisted on using the word intentionality rather than consciousness; I suppose you thought it sounded cooler.

> I argue that even if we find a way to compute the brain, it does not follow that a simulation of it would have intentionality

You haven't argued anything. An argument isn't just contradiction, an argument is a connected series of statements intended to establish a proposition. You may object to this meaning but I really must insist that argument is an intellectual process. Contradiction is just the automatic gainsaying of any statement. Look, if I argue with you, I must take up a contrary position. Yes, but that's not just saying 'No it isn't. Yes it is! No it isn't! Yes it is!' I'm sorry, but your time is up and I'm not allowed to argue anymore.

I want to thank Professor Python for the invaluable help he gave me in writing this post.

John K Clark

> any more than it follows that a computer simulation of a ham sandwich would taste like a ham sandwich, or that a computer simulation of a waterfall would make a computer wet. Computer simulations of things do not equal the things they simulate.
>
> I recall learning of a tribe of people in the Amazon forest or some such place that had never seen cameras. After seeing their photos for the first time, they came to fear them on the grounds that these amazing simulations of themselves captured their spirits. Not only did these naive people believe in spirits, they must also have believed that simulations of things somehow equal the things they simulate.
>
> -gts
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike66 at att.net  Sat Jan  2 21:38:14 2010
From: spike66 at att.net (spike)
Date: Sat, 2 Jan 2010 13:38:14 -0800
Subject: [ExI] quiz for the new year
Message-ID: 

>Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?

>Yes   No   Cannot be determined

The answer is YES of course. Regardless of Anne's marital status, either way a married person is looking at, perhaps gazing fondly and lustily upon, an unmarried person. I supplied the adverbs, but the quiz came from the excellent article below on irrationality in smart people:

http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-kleiner/

How many answered it correctly?

How many of you horndogs are like me, pondering the comely figure of Anne, instead of concentrating your intelligence on being rational?

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike66 at att.net  Sat Jan  2 21:22:17 2010
From: spike66 at att.net (spike)
Date: Sat, 2 Jan 2010 13:22:17 -0800
Subject: [ExI] quiz for the new year
Message-ID: <7E6C40DDF6AD4E8D854E951388AEFF0C@spike>

Jack is looking at Anne, but Anne is looking at George.
Jack is married but George is not. Is a married person looking at an unmarried person?

Yes   No   Cannot be determined

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonkc at bellsouth.net  Sat Jan  2 22:16:56 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sat, 2 Jan 2010 17:16:56 -0500
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <955017.56632.qm@web36504.mail.mud.yahoo.com>
References: <955017.56632.qm@web36504.mail.mud.yahoo.com>
Message-ID: 

On Jan 2, 2010, Gordon Swobe wrote:

> If you think I have an issue with Darwin then either you don't understand me or you don't understand Darwin.

I'm sure emotionally you side with Darwin, but you haven't pondered his ideas in any depth, because if you had you'd know that the idea that consciousness and intelligence can be separated and are caused by processes that have nothing to do with each other is 100% contradictory to Darwin's insight. Foolish creationists who don't understand Darwin like to say that life couldn't have come about by chance alone and they're right, it couldn't come about by chance. Darwin, in what was probably the single best idea any member of our species ever had, came up with a way to explain not only how life came to be but also how intelligence did. However if consciousness and intelligence were not just 2 sides of the same thing but, as you believe, entirely separate phenomena, then science has no explanation of how consciousness came to be on this small blue planet.

And yet somehow consciousness did come to be; I am conscious and it's not entirely outside the laws of possibility that you are too. You ask us to believe that besides the process that produced life and intelligence, working in parallel with that and entirely at random, a different... something... created consciousness. For the first time in my life I know what a creationist who doesn't understand Darwin feels like. THE ENTIRE THING IS JUST BRAIN DEAD DUMB.

> I have ignored your noises about this subject because usually I have very little time on my hands
> and more interesting things to write about.

Wow! I sure wish I knew where you wrote those more interesting things, things more interesting than life or intelligence. No doubt you won't respond to any of my points because you're too busy explaining why a computer made of beer cans and toilet paper couldn't be conscious no matter how brilliantly it behaved, because it just couldn't; and besides, an intelligent beer can would be strange and strange things can't happen. Time management in action.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stathisp at gmail.com  Sun Jan  3 01:48:13 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sun, 3 Jan 2010 12:48:13 +1100
Subject: [ExI] Some new angle about AI
In-Reply-To: <68150.34111.qm@web36508.mail.mud.yahoo.com>
References: <68150.34111.qm@web36508.mail.mud.yahoo.com>
Message-ID: 

2010/1/3 Gordon Swobe :
> --- On Fri, 1/1/10, Stathis Papaioannou wrote:
>
>> The only certain way to simulate a brain is to simulate the activity
>> of neurons at the molecular level.
>
> I agree with your general direction but I wonder how you know we needn't simulate them at the atomic or subatomic level. How do you know it's not turtles all the way down?

Of course, the behaviour of molecules reduces to the behaviour of atoms and subatomic particles, but the models of computational chemistry should take this into account.
We know from experiments that some shortcuts are allowed: for example, radiolabeled biologically active molecules seem to behave normally, indicating that we don't always need to take into account what goes on at the nuclear level. I'm sure there will be other shortcuts allowing modelling above the molecular level, but discovering what these shortcuts are will require experiment, comparing the model with the real thing and seeing if they match.

> At the end of that philosophical tunnel, the simulation of the thing finally becomes the thing it simulates. Form and matter converge.

A computer simulation, however faithful, will not be identical to the real thing, as you have correctly pointed out before. However, this does not mean that a simulation cannot perform a function of the real thing. A simulated clock can tell time as well as an analogue clock. In fact, we don't use the term "simulated clock": we say that there are analogue clocks and digital clocks, and both clocks tell time. Similarly, a simulated brain is not identical with a biological brain, but it might perform the same function as a biological brain.

-- 
Stathis Papaioannou

From stathisp at gmail.com  Sun Jan  3 03:08:47 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sun, 3 Jan 2010 14:08:47 +1100
Subject: [ExI] Some new angle about AI
In-Reply-To: <4B3F9D05.40903@rawbw.com>
References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com>
Message-ID: 

2010/1/3 Lee Corbin :

Good to hear from you again, Lee.

> For those of us who are functionalists (or, in my case, almost
> 100% functionalists), it seems almost inconceivable that the causal
> components of an entity's having an experience require anything
> beneath the neuron level. In fact, it's very likely that the
> simulation of whole neuron tracks or bundles suffices.

The only reason to simulate the internal processes of a neuron is that you can't otherwise be sure what it's going to do. For example, the neuron may have decided, in response to past events and because of the type of neuron it is, that it is going to increase production of dopamine receptors, decrease production of MAO and increase production of COMT (both enzymes that break down dopamine and other catecholamines). This is going to change the neuron's sensitivity to dopamine in a complex way, and therefore the neuron's behaviour, and therefore the whole brain's behaviour. In your model of the neuron you need a "sensitivity to dopamine" function which takes as variables the neuron's present state and all the inputs acting on it. If you can figure out what this function is by treating the neuron as a black box then, implicitly, you have modelled its internal processes even though you might not know what dopamine receptors, COMT or MAO are. However, it might be easier to get this function if you model the internal processes explicitly.

I could go further and say that it isn't necessary even to simulate the behaviour of a neuron in order to simulate the brain. You could use cubic millimetres of brain tissue as the basic unit, ignoring natural biological boundaries such as cell membranes. If you can predict the cube's outputs in response to inputs, you can predict the behaviour of the whole brain. But for practical reasons, it would be easier to do the modelling at least at the cellular level.

> But I have no way of going forward to address Gordon's
> question.
> Logically, we have no way of knowing that in
> order to emulate experience, you have to simulate every
> single gluon, muon, quark, and electron. However, we
> can *never* in principle (so far as I can see) begin to
> answer that question, because ultimately, all we'll
> finally have to go on is behavior (with only a slight
> glance at the insides).

I think the argument from partial brain replacement that I have put forward to Gordon shows that if you can reproduce the behaviour of the brain, then you necessarily also reproduce the consciousness. Simulating neurons and molecules is just a means to this end.

-- 
Stathis Papaioannou

From jonkc at bellsouth.net  Sun Jan  3 06:40:25 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 3 Jan 2010 01:40:25 -0500
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: 
References: <853659.71219.qm@web36508.mail.mud.yahoo.com>
Message-ID: 

On Jan 1, 2010, Dave Sill wrote:

>> From that I conclude that intelligent behavior must produce consciousness.
>
> OK, here you lost me. I don't see how you can say anything stronger
> than "intelligent behavior *can* produce consciousness".

If consciousness were not linked with intelligence it would not exist on this planet. Even if consciousness happened by pure chance it wouldn't last because it would have zero survival value; it would fade away through genetic drift just as the eyes of cave creatures disappear because they are a completely useless aid to survival. In spite of all this, right now, half a billion years after evolution invented brains, I am conscious and you may be too. I can only conclude that consciousness is a byproduct of intelligence; it is the way data feels when it is being processed.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonkc at bellsouth.net  Sun Jan  3 07:03:02 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 3 Jan 2010 02:03:02 -0500
Subject: [ExI] Some new angle about AI.
In-Reply-To: 
References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com>
Message-ID: 

On Jan 1, 2010, Stathis Papaioannou wrote:

> organic brains do better than computers at the highest level of
> mathematical creativity.

Creativity is a moving target; as soon as a computer can do something, pundits tell us that the thing in question wasn't really creative after all.

> it is this rather than the ability to have feelings, produce art etc. that Roger Penrose used in
> his case against AI.

The thing I don't understand is that if the human brain makes use of quantum mechanical principles to work its magic, why can't we factor numbers better than computers?

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stathisp at gmail.com  Sun Jan  3 09:35:08 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sun, 3 Jan 2010 20:35:08 +1100
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <688803.71262.qm@web36507.mail.mud.yahoo.com>
References: <688803.71262.qm@web36507.mail.mud.yahoo.com>
Message-ID: 

2010/1/3 Gordon Swobe :
> --- On Fri, 1/1/10, Stathis Papaioannou wrote:
>
>> Right, I asked you the question from the point of view of
>> a concrete-thinking technician.
>> This simpleton sets about
>> building artificial neurons from parts he buys at Radio Shack
>> without it even occurring to him that the programs these parts run
>> are formal descriptions of real or supposed objects which simulate but
>> do not equal the objects. When he is happy that his artificial
>> neurons behave just like the real thing he has his friend the surgeon,
>> also technically competent but not philosophically inclined,
>> install them in the brain of a patient rendered aphasic after a stroke.
>
> The surgeon replaces all those neurons relevant to correcting the patient's aphasia with a-neurons programmed and configured in such a way that the patient will pass the Turing test while appearing normal and healthy. We don't know in 2009 if this requires work in areas outside Wernicke's but we'll assume our surgeon here knows.
>
> The TT and the subject's reported symptoms represent the surgeon's only means of measuring the supposed health of his patient.
>
>> We can add a second part to the experiment in which the technician
>> builds another set of artificial neurons based on clockwork nanomachinery
>> rather than digital circuits and has them installed in a second
>> patient, the idea being that the clockwork neurons do not run formal
>> programs.
>
> A second surgeon does the same with this patient, releasing him from the hospital after he appears healthy and passes the TT.
>
>> You then get to talk to the patients. Will both patients be
>> able to speak equally well?
>
> Yes.
>
>> If so, would it be right to say that one understands what he is saying
>> and the other doesn't?
>
> Yes. On Searle's view the TT gives false positives for the first patient.
>
>> Will the patient with the clockwork neurons report he feels normal while
>> the other one reports he feels weird? Surely you should be able to
>> observe *something*.
>
> If either one appears or reports feeling abnormal, we send him back to the hospital.

Thank-you for clearly answering the question. Now some problems.

Firstly, I understand that you have no philosophical objection to the idea that the clockwork neurons *could* have consciousness, but you don't think that they *must* have consciousness, since you don't (to this point) believe as I do that behaving like normal neurons is sufficient for this conclusion. Is that right? Moreover, if consciousness is linked to substrate rather than function then it is possible that the clockwork neurons are conscious but with a different type of consciousness.

Secondly, suppose we agree that clockwork neurons can give rise to consciousness. What would happen if they looked like conventional clockwork at one level but at higher resolution we could see that they were driven by digital circuits, like the digital mechanism driving most modern clocks with analogue displays? That is, would the low level computations going on in these neurons be enough to change or eliminate their consciousness?

Finally, the most important point. The patient with the computerised neurons behaves normally and says he feels normal. Moreover, he actually believes he feels normal and that he understands everything said to him, since otherwise he would tell us something is wrong.
The verbal information processed in the artificial part of his brain (Wernicke's area) is passed to the rest of his brain normally: for example, if you describe a scene he can draw a picture of it, if you tell him something amusing he will laugh, and if you describe a complex problem he will think about it and propose a solution. But despite this, he will understand nothing, and will simply have the delusional belief that he has normal understanding. Or in the case of the clockwork neurons, he may have an alien type of understanding, but again behave normally and have the delusional belief that his understanding is normal.

That a person could be a zombie and not know it is logically possible, since a zombie by definition doesn't know anything; but that a person could be a partial zombie and be systematically unaware of this even with the non-zombified part of his brain seems to me incoherent. How do you know that you're not a partial zombie now, unable to understand anything you are reading? What reason is there to prefer normal neurons to computerised zombie neurons given that neither you nor anyone else can ever notice a difference? This is how far you have to go in order to maintain the belief that neural function and consciousness can be separated. So why not accept the simpler, logically consistent and scientifically plausible explanation that is functionalism?

I suppose at this point you might return to the original claim, that semantics cannot be derived from syntax, and argue that it is strong enough to justify even such weirdness as partial zombies. But this isn't the case. I actually believe that semantics can *only* come from syntax, but if it can't, your fallback is that semantics comes from the physical activity inside brains. Thus, even accepting Searle's argument, there is no *logical* reason why semantics could not derive from other physical activity, such as the physical activity in a computer implementing a program.

-- 
Stathis Papaioannou

From dharris at livelib.com  Sun Jan  3 10:47:21 2010
From: dharris at livelib.com (David C. Harris)
Date: Sun, 03 Jan 2010 02:47:21 -0800
Subject: [ExI] quiz for the new year
In-Reply-To: 
References: 
Message-ID: <4B4075B9.2070902@livelib.com>

spike wrote:
> >Jack is looking at Anne, but Anne is looking at George. Jack is
> married but George is not. Is a married person looking at an unmarried
> person?
>
> >Yes   No   Cannot be determined
>
> ...
> ... excellent article below on irrationality in smart people:
>
> http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-kleiner/
>
> How many answered it correctly?
>
> How many of you horndogs are like me, pondering the comely figure of
> Anne, instead of concentrating your intelligence on being rational?
>
> spike

Excellent article indeed! I initially answered wrong, then analyzed the T/F cases for Anne's marriedness when I read your endorsement of YES, and then wondered "now why did I do that wrong and feel so confident?"

I first noticed a discrepancy between normal logic and my ability to detect correct answers when I took touch typing, after I became enamored with the potential of computer keyboards. My fingers responded to characters I saw WITHOUT MY MIND being aware of a choice to use a particular finger and motion.

Another discrepancy occurred with the Miller Analogy Test, a test of verbal analogies. I do those VERY well, scoring in the top 1% of the highest reference group (psychiatric trainees).
But I noticed that I was picking the right answers without knowing explicitly why I chose those answers. This felt like another "bypass" of explicit personal control.

I'm particularly interested in what the article calls "mindware", which probably overlaps with mathematics: representations and methods of processing that lead us to better answers. I've benefited greatly from using Venn diagrams and from checking the units in science calculations (e.g. there is confusion in some of the global warming discussions when people use "kiloWatts" as if it meant "kiloWatt hours"). With this little puzzle, what would be a good mindware tool to use? I built a graph of the "looks at" relationships, but didn't realize I'd need to examine the two values of "married" for Anne.

- David Harris, Palo Alto

From stathisp at gmail.com  Sun Jan  3 11:41:19 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sun, 3 Jan 2010 22:41:19 +1100
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: 
References: <853659.71219.qm@web36508.mail.mud.yahoo.com>
Message-ID: 

2010/1/3 John Clark :
> If consciousness were not linked with intelligence it would not exist on
> this planet. Even if consciousness happened by pure chance it wouldn't last
> because it would have zero survival value; it would fade away through
> genetic drift just as the eyes of cave creatures disappear because they are
> a completely useless aid to survival. In spite of all this, right now, half
> a billion years after evolution invented brains, I am conscious and you may
> be too. I can only conclude that consciousness is a byproduct of
> intelligence; it is the way data feels when it is being processed.

This is not true if it is impossible to create intelligent behaviour without consciousness using biochemistry, but possible using electronics, which evolution had no access to. I point this out only for the sake of logical completeness, not because I think it is plausible.

-- 
Stathis Papaioannou

From bbenzai at yahoo.com  Sun Jan  3 15:36:16 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Sun, 3 Jan 2010 07:36:16 -0800 (PST)
Subject: [ExI] MBTI, and what a difference a letter makes...
In-Reply-To: 
Message-ID: <501890.28524.qm@web113619.mail.gq1.yahoo.com>

Aware wrote:
> Offered, for your consideration, a wondrous land where its denizens
> share an appreciation of novelty, and discovery of the subtle but
> meaningful patterns underlying it all. You're entering a dimension
> not only of sight and sound, but of mind. This land is a refuge for
> the rare race known as the xNTx, and into this land you, an ISTJ or
> Inspector Guardian, have just stumbled. That's the signpost just
> ahead: your next stop, the Extropy list.
>
> Guardian Swobe, as an ISTJ, in your travels among rationalists, idealists, and
> artisans, you might do well to share this:
>
> - Jef (INTJ)

Hm.

I'm not at all convinced by these personality tests. Every time I've tried a Myers-Briggs test (being just as vain as everyone else), I've got a different result. So far, I'm INTP, INFJ, and INFP, so rather than an xNTx, I seem to be an INxx. Does that mean anything? I'm starting to doubt it.

Or maybe it would mean something, if someone could come up with a good set of questions. Usually, at least a few of the questions are silly or unanswerable, so you just have to pick one without worrying too much about it.

Also, the summaries remind me more of a horoscope than anything. Why do I never read anything bad about myself?
That's suspicious; I'm not so vain as to think I don't have bad points.

Haven't been able to try the Keirsey test, the guy seems a bit precious about it, and makes people take down sites that offer free versions of it. Which is reason enough to dismiss it, imo.

The distinctions seem a bit silly, too. e.g. N/S: you can either be Introspective OR Observant. ??? What if you are an observant introspective person?

It all seems an attempt to force people into categories that are too rigidly defined (where's the category for anti-authoritarian contrarians?).

Ben Zaiboc
(IENSFTPJ)

From gts_2000 at yahoo.com  Sun Jan  3 16:20:14 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 3 Jan 2010 08:20:14 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: 
Message-ID: <867961.71243.qm@web36504.mail.mud.yahoo.com>

--- On Sun, 1/3/10, Stathis Papaioannou wrote:

> Thank-you for clearly answering the question.

Welcome.

Suggested abbreviations and conventions:

m-neurons = material ("clockwork") artificial neurons
p-neurons = programmatic artificial neurons
Sam = the patient with the m-neurons
Cram = the patient with the p-neurons (CRA-man)

(If Sam and Cram look familiar it's because I used these names in a similar thought experiment of my own design.)

> Firstly, I understand that you have no philosophical
> objection to the idea that the clockwork neurons *could* have
> consciousness, but you don't think that they *must* have consciousness,
> since you don't (to this point) believe as I do that behaving like normal
> neurons is sufficient for this conclusion. Is that right?

No: because I reject epiphenomenalism, I think Sam cannot pass the TT without genuine intentionality. If Sam's m-neurons fail to result in a passing TT score for Sam then we have no choice but to take his m-neurons back to the store and demand a refund.

> Moreover, if consciousness is linked to substrate rather than function
> then it is possible that the clockwork neurons are conscious but with
> a different type of consciousness.

If Sam passes the TT and reports normal subjective experiences from m-neurons then I will consider him cured. I have no concerns about "type" of consciousness.

> Secondly, suppose we agree that clockwork neurons can give
> rise to consciousness. What would happen if they looked like
> conventional clockwork at one level but at higher resolution we could
> see that they were driven by digital circuits, like the digital mechanism
> driving most modern clocks with analogue displays? That is, would
> the low level computations going on in these neurons be enough to
> change or eliminate their consciousness?

Yes. In that case the salesperson deceived us. He sold us p-neurons in a box labeled m-neurons. And if we cannot detect the digital nature of these neurons from careful physical inspection, and must instead conceive of some digital platonic realm that drives or causes material objects, then you will have introduced into our experiment the quasi-religious philosophical idea of substance or property dualism.

> Finally, the most important point. The patient with the computerised
> neurons behaves normally and says he feels normal.

Yes.

> Moreover, he actually believes he feels normal and that he understands
> everything said to him, since otherwise he would tell us something is
> wrong.

No, he does not "actually" believe anything. He merely reports that he feels normal and reports that he understands.
His surgeon programmed all p-neurons such that he would pass the TT and report healthy intentionality, including but not limited to p-neurons in Wernicke's area.

> The verbal information processed in the artificial part of
> his brain (Wernicke's area) is passed to the rest of his brain
> normally: for example, if you describe a scene he can draw
> a picture of it, if you tell him something amusing he will laugh, and
> if you describe a complex problem he will think about it and
> propose a solution. But despite this, he will understand nothing, and
> will simply have the delusional belief...

He will have no conscious beliefs, delusional or otherwise.

> That a person could be a zombie and not know it is
> logically possible, since a zombie by definition doesn't know anything;
> but that a person could be a partial zombie and be systematically
> unaware of this even with the non-zombified part of his brain seems to me
> incoherent.

I see nothing incoherent about it, except when you ask me to imagine the unimaginable as you did in your last thought experiment. In effect, the relevant parts of Cram's brain act like a computer, or mesh of computers, that run programs. That computer network receives symbolic inputs and generates symbolic outputs. Cram passes the TT yet he has no grasp of the meanings of the symbols his computerized brain manipulates.

And if the surgeon programmed the p-neurons correctly then those parts of Cram's brain associated with "reporting subjective feelings" will run programs that ensure Cram will talk very much like Sam. We cannot distinguish Cram from Sam except with philosophical arguments. If we can, then one patient or the other has not overcome his illness. One surgeon or the other failed to do his job.

> How do you know that you're not a partial zombie now, unable to
> understand anything you are reading?

I know because I do understand your words and I know I do (contrast this with your last experiment, in which I could not even say with certainty that I existed, much less that I could understand anything).

> What reason is there to prefer normal neurons to computerised zombie
> neurons given that neither you nor anyone else can ever notice a
> difference?

I notice the difference and I prefer existence.

> This is how far you have to go in order to maintain the belief that
> neural function and consciousness can be separated. So why not accept the
> simpler, logically consistent and scientifically plausible
> explanation that is functionalism?

You assume here that I have followed your argument.

> I actually believe that semantics can *only* come from syntax,

As a programmer of syntax I want to believe that too. Hasn't happened. :)

-gts

From jonkc at bellsouth.net  Sun Jan  3 16:39:21 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 3 Jan 2010 11:39:21 -0500
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: 
References: <853659.71219.qm@web36508.mail.mud.yahoo.com>
Message-ID: 

On Jan 3, 2010, Stathis Papaioannou wrote:

> This is not true if it is impossible to create intelligent behaviour
> without consciousness using biochemistry, but possible using
> electronics, which evolution had no access to. I point this out only
> for the sake of logical completeness, not because I think it is
> plausible.
Even in that case it would indicate that it would be easier to make a conscious intelligence than an unconscious one, so it would seem wise that when you encounter intelligence your default assumption should be that consciousness is behind it. Searle assumes the opposite: he assumes unconsciousness regardless of how brilliant an intelligence may be unless consciousness is proven; the catch-22 of course is that consciousness can never be proven.

Also, if it's the biochemistry inside the neuron that mysteriously generates consciousness, and not the signals between neurons that a computer could simulate, then each neuron is on its own as far as consciousness is concerned. One neuron would be sufficient to produce consciousness; it would have to be, because they can't work together on this project. If you allow one neuron to have consciousness even though it has no intelligence, it would be a very small step to insist that rocks, which are no dumber than neurons, have it too. So now we have intelligence without consciousness and consciousness without intelligence and rocks with feelings; that is not a position I'd be comfortable defending.

As I said before, creationists correctly say that life and intelligence are too grand to have come about by chance, but Searle says that's exactly how biology came up with consciousness.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike66 at att.net  Sun Jan  3 16:14:11 2010
From: spike66 at att.net (spike)
Date: Sun, 3 Jan 2010 08:14:11 -0800
Subject: [ExI] quiz for the new year
In-Reply-To: <4B4075B9.2070902@livelib.com>
References: <4B4075B9.2070902@livelib.com>
Message-ID: <3D018AC14F854C5F824D2203728E6109@spike>

> ...On Behalf Of David C. Harris.
> ...
> ... excellent article below on irrationality in smart people:
> http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-kleiner/
> ...
> spike
> Excellent article indeed! I initially answered wrong...

Thanks, me too.

> I first noticed a discrepancy between normal logic and my
> ability to detect correct answers when I took touch typing,
> after I became enamored with the potential of computer
> keyboards...

WOW you must be nearly as old as I am. Modern people don't take touch typing, rather they seem to be born knowing the QWERTY arrangement. My son is 3.5 and is already demonstrating some proficiency.

> My fingers responded to characters I saw WITHOUT
> MY MIND being aware of a choice to use a particular finger and motion...

Same here. When one can think thru a keyboard, one's writing becomes so much less labored and in many cases filled with silliness, as I have demonstrated in this forum.

> ...
> (e.g. there is confusion in some of the global warming
> discussions when people use "kiloWatts" as if it meant
> "kiloWatt hours")...

Ja, it is an after-effect of our government confusing the units of jobs created/saved when it meant job-hours created or saved:

http://origins.recovery.gov/Pages/home.aspx

> ...I built a graph of the "looks at" relationships, but
> didn't realize I'd need to examine the two values of
> "married" for Anne. - David Harris, Palo Alto

The name Anne has been forever associated in my mind with the dazzling Anne Hathaway:

http://images.google.com/images?hl=en&source=hp&q=anne+hathaway&rlz=1W1GGLL_en&um=1&ie=UTF-8&ei=MMBAS6KXDoGmsgOek5zLBA&sa=X&oi=image_result_group&ct=title&resnum=1&ved=0CB0QsAQwAA

David, I see you are from Palo Alto. We should gather the local ExI-chatters for sushi or something.
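Incidentally, the two-case analysis David describes is small enough to brute-force. A minimal Python sketch (purely illustrative throwaway code, with invented names; nothing here comes from the article itself):

    # Brute-force the Jack -> Anne -> George puzzle.
    # True = married. Anne's status is the only unknown, so try both.
    jack, george = True, False

    for anne in (True, False):
        pairs = [(jack, anne), (anne, george)]  # (looker, looked-at)
        hit = any(looker and not target for looker, target in pairs)
        print("Anne married:", anne, "-> married looking at unmarried:", hit)

Both cases print True, so the answer is Yes without ever pinning down Anne's status.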
Regarding the point of your post, the curious discrepancy between intelligence and rationality, it is something I have observed and pondered for some time. Our IQ tests do nothing to measure or indicate the level of rationality. I would be interested in figuring out a way to create an RQ test. Any ideas?

spike

From gts_2000 at yahoo.com  Sun Jan  3 17:39:33 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 3 Jan 2010 09:39:33 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: 
Message-ID: <348177.73638.qm@web36506.mail.mud.yahoo.com>

--- On Sat, 1/2/10, John Clark wrote:

>> If you think I have an issue with Darwin then either you don't
>> understand me or you don't understand Darwin.
>
> I'm sure emotionally you side with Darwin, but
> you haven't pondered his ideas in any depth because if
> you had you'd know that the idea that consciousness and
> intelligence can be separated and are caused by processes
> that have nothing to do with each other is 100%
> contradictory to Darwin's insight.

You misunderstand me if you think I believe consciousness and intelligence exist "separately" in humans or other animals. For most purposes we can consider them near synonyms or at least as handmaidens.

The distinction does however become important in the context of strong AI research. Symbol grounding requires the sort of subjective first-person perspective that evolved in these machines we call humans, and which probably also evolved in other species. If we can duplicate it in software/hardware systems then they can have strong AI. Not really a complicated idea.

-gts

From jonkc at bellsouth.net  Sun Jan  3 18:37:50 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 3 Jan 2010 13:37:50 -0500
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <348177.73638.qm@web36506.mail.mud.yahoo.com>
References: <348177.73638.qm@web36506.mail.mud.yahoo.com>
Message-ID: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net>

On Jan 3, 2010, at 12:39 PM, Gordon Swobe wrote:

> You misunderstand me if you think I believe consciousness and intelligence exist "separately" in humans or other animals. For most purposes we can consider them near synonyms or at least as handmaidens.

Well that's a start.

> The distinction does however become important in the context of strong AI research. Symbol grounding requires the sort of subjective first-person perspective that evolved in these machines we call humans

The operative word in the above is "evolved". Why did this mysterious "subjective symbol grounding" (bafflegab translation: consciousness) evolve? Not only can't you explain how this thing is supposed to work, you can't explain how it came to be. Certainly Darwin would be no help, as it would have absolutely no effect on behavior; in fact that is precisely why you think the Turing Test doesn't work. And even if it came about by pure chance it wouldn't last; in fact it would be detrimental, as the resources used to generate consciousness could better be used for things that actually did something, like help get genes into the next generation. And yet consciousness exists. Why? We don't know a lot about consciousness but one of the few things we do know is that Darwin is screaming that intelligence and consciousness are two sides of the same coin.

John K Clark

> , and which probably also evolved in other species. If we can duplicate it in software/hardware systems then they can have strong AI. Not really a complicated idea.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gts_2000 at yahoo.com  Sun Jan  3 18:38:27 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 3 Jan 2010 10:38:27 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: 
Message-ID: <155704.37523.qm@web36508.mail.mud.yahoo.com>

--- On Sun, 1/3/10, John Clark wrote:

> As I said before, creationists correctly say that
> life and intelligence are too grand to have come about by
> chance, but Searle says that's exactly how biology came
> up with consciousness.

Simply not so, John. Consciousness evolved as an adaptive trait to aid intelligence.

An amoeba has intelligence insofar as it responds in intelligent ways to its environment, for example in ways that help it find nourishment, but it does not appear to have much consciousness, assuming it has any at all. Having no nervous system, it appears to have only what we might call instinctive or unconscious intelligence. Not unlike a computer.

Higher organisms like us have intelligence enhanced with consciousness. They can ground symbols and do other things that many would like to see computers do.

-gts

From thespike at satx.rr.com  Sun Jan  3 18:45:17 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Sun, 03 Jan 2010 12:45:17 -0600
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net>
References: <348177.73638.qm@web36506.mail.mud.yahoo.com> <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net>
Message-ID: <4B40E5BD.1050700@satx.rr.com>

On 1/3/2010 12:37 PM, John Clark wrote:

> intelligence and consciousness are two sides of the same coin.

two sides of the same koan

From jonkc at bellsouth.net  Sun Jan  3 18:45:26 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 3 Jan 2010 13:45:26 -0500
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <155704.37523.qm@web36508.mail.mud.yahoo.com>
References: <155704.37523.qm@web36508.mail.mud.yahoo.com>
Message-ID: <9350929F-5078-4AB7-89A3-C18A1F625929@bellsouth.net>

On Jan 3, 2010, Gordon Swobe wrote:

> Consciousness evolved as an adaptive trait to aid intelligence.

Then the Turing Test works. You can't have it both ways!

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike66 at att.net  Sun Jan  3 18:46:47 2010
From: spike66 at att.net (spike)
Date: Sun, 3 Jan 2010 10:46:47 -0800
Subject: [ExI] MBTI, and what a difference a letter makes...
In-Reply-To: <501890.28524.qm@web113619.mail.gq1.yahoo.com>
References: <501890.28524.qm@web113619.mail.gq1.yahoo.com>
Message-ID: 

> ---...On Behalf Of Ben Zaiboc
> ...
>
> Hm.
>
> ...Every
> time I've tried a Myers-Briggs test (being just as vain as
> everyone else), I've got a different result. So far, I'm
> INTP, INFJ, and INFP, so rather than an xNTx, I seem to be
> an INxx. Does that mean anything?...

Multiple personality disorder? {8^D

One's score to a large extent does depend on one's mood at the moment, but it is just a game. It would be entertaining to try to create a pool of each of the 16 segments, then try to derive questions that would consistently identify each. I come out pretty consistently INTP or ENTP, depending on my mood.

> ...
> The distinctions seem a bit silly, too. e.g. N/S: you can
> either be Introspective OR Observant. ??? What if you are
> an observant introspective person?
> It all seems an attempt to force people into categories that
> are too rigidly defined (where's the category for
> anti-authoritarian contrarians?).
>
> Ben Zaiboc
> (IENSFTPJ)

Again, it is just a game, invented by sociologist types. Notice also a similarity with horoscopes: the description of each category is very general and at least moderately flattering. An explanation of the popularity of the game might be that everyone is pleased with the description they see.

It would be cool to try to create a four-bit identifier game designed by engineers and scientists. Secondly, as a little joke, make the description of each category a biting criticism, such as the internet gag-horoscopes that went around a few years ago, where the horoscopes started out with the usual mush, but progressed toward ending comments such as "those who know you well consider you an arrogant asshole." {8^D Does anyone here remember that game?

spike

From gts_2000 at yahoo.com  Sun Jan  3 19:42:19 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 3 Jan 2010 11:42:19 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net>
Message-ID: <392806.41976.qm@web36507.mail.mud.yahoo.com>

--- On Sun, 1/3/10, John Clark wrote:

> The operative word in the above is "evolved". Why did this mysterious
> "subjective symbol grounding" (bafflegab translation: consciousness)
> evolve?

To help you communicate better with other monkeys, among other things.

I think you really want to ask how it happened that humans did not evolve as unconscious zombies. Why did evolution select consciousness? I think one good answer is that perhaps nature finds it cheaper when its creatures have first-person awareness of the things they do and say. We would probably find it more efficient in computers also. We just need to figure out what nature did, and then do something similar.

-gts

From gts_2000 at yahoo.com  Sun Jan  3 21:07:10 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 3 Jan 2010 13:07:10 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net>
Message-ID: <822376.84128.qm@web36501.mail.mud.yahoo.com>

--- On Sun, 1/3/10, John Clark wrote:

> Darwin is screaming that intelligence and consciousness are
> two sides of the same coin.

You're screaming it but I don't hear Darwin screaming it. Again, amoebas appear to have intelligence, but most people including me would find themselves hard-pressed to say they have what I mean by consciousness.

I think Darwin and Searle would enjoy each other's company and never find any reason for disagreement.

Searle: "Some people think human brains work just like computers, but I reject the computationalist theory as false."

Darwin: "What the heck is a computer, and why should anyone believe my theory of evolution gives a hoot about them?"

Might make for an interesting conversation.

-gts

From brentn at freeshell.org  Sun Jan  3 22:50:20 2010
From: brentn at freeshell.org (Brent Neal)
Date: Sun, 3 Jan 2010 17:50:20 -0500
Subject: [ExI] Elemental abundances
Message-ID: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org>

Does anyone have a lead on a source, preferably academic in quality, for the relative elemental abundances in the inner solar system out to Z=92? All the data I've been able to find thus far talks about either the Earth's crust or stops with Z=35.

Thanks!

Brent

-- 
Brent Neal, Ph.D.
http://brentn.freeshell.org

From pharos at gmail.com  Sun Jan  3 23:31:20 2010
From: pharos at gmail.com (BillK)
Date: Sun, 3 Jan 2010 23:31:20 +0000
Subject: [ExI] Elemental abundances
In-Reply-To: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org>
References: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org>
Message-ID: 

On 1/3/10, Brent Neal wrote:
> Does anyone have a lead on a source, preferably academic in quality, for the
> relative elemental abundances in the inner solar system out to Z=92? All the
> data I've been able to find thus far talks about either the Earth's crust or
> stops with Z=35.

Does this help?

Quote:
Elements with atomic numbers 43, 61, 84-89, and 91 have no stable or long-lived isotopes, and therefore have vanishingly small abundances.

----------
BillK

From gts_2000 at yahoo.com  Mon Jan  4 02:01:36 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 3 Jan 2010 18:01:36 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: 
Message-ID: <201790.46747.qm@web36504.mail.mud.yahoo.com>

--- On Sun, 1/3/10, Stathis Papaioannou wrote:

Revisiting this question:

> Firstly, I understand that you have no philosophical
> objection to the idea that the clockwork neurons *could* have
> consciousness, but you don't think that they *must* have consciousness,
> since you don't (to this point) believe as I do that behaving like normal
> neurons is sufficient for this conclusion. Is that right?

In my last to you I referred to the m-neurons actually used in the experiment. Either they work, in which case the patient passes the TT and reports normal intentionality and gets released from the hospital, or they don't. But in re-reading your words I understand that you really want to know if I agree that they needn't work or fail solely by virtue of their inputs and outputs. Yes, I agree with that, as you already know.

We simply do not know what neurons must contain to allow a brain to become conscious, but I'd bet that artificial neurons stuffed only with mashed potatoes and gravy won't do the trick, even if we somehow engineer them at the edges to output the correct neurotransmitters into the synapses.

> I actually believe that semantics can *only* come from
> syntax, but if it can't, your fallback is that semantics
> comes from the physical activity inside brains.

Something along those lines, yes. But we can't paste form onto substance and expect intrinsic intentionality, and that's all formal programs do to hardware substance. We might just as well write a letter and expect the letter to understand the words.

-gts

From brentn at freeshell.org  Mon Jan  4 02:23:44 2010
From: brentn at freeshell.org (Brent Neal)
Date: Sun, 3 Jan 2010 21:23:44 -0500
Subject: [ExI] Elemental abundances
In-Reply-To: 
References: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org>
Message-ID: <1294BDB8-F242-40DA-BE58-206654BF87AF@freeshell.org>

On 3 Jan, 2010, at 18:31, BillK wrote:

> Does this help?

That's close to what I'm looking for. I may just have to pull a copy of the referenced book. I was hoping for something that referenced primary sources that had not only a good graph, but also measurement errors and distributions.

Thanks for the link!

B

-- 
Brent Neal, Ph.D.
http://brentn.freeshell.org

From emlynoregan at gmail.com  Mon Jan  4 06:42:12 2010
From: emlynoregan at gmail.com (Emlyn)
Date: Mon, 4 Jan 2010 17:12:12 +1030
Subject: [ExI] MBTI, and what a difference a letter makes...
In-Reply-To: References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Message-ID: <710b78fc1001032242j5b02824fxc64324484f0a5cfe@mail.gmail.com> > Again, it is just a game, invented by sociologist types. Notice also a > similarity with horoscopes: the description of each category is very general > and at least moderately flattering. An explanation of the popularity of the > game might be that everyone is pleased with the description they see. > > It would be cool to see a four-bit identifier game designed by > engineers and scientists. Then, as a little joke, make the description > of each category a biting criticism, such as the internet gag-horoscopes > that went around a few years ago, where the horoscopes started out with the > usual mush, but progressed to closing comments such as "those who know > you well consider you an arrogant asshole." {8^D Does anyone here remember > that game? > > spike For a system designed by scientists (well, psychologists), how about the big 5 personality traits? http://en.wikipedia.org/wiki/Big_Five_personality_traits "The Big Five model is considered to be one of the most comprehensive, empirical, data-driven research findings in the history of personality psychology." Elements are: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From emlynoregan at gmail.com Mon Jan 4 06:52:37 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 4 Jan 2010 17:22:37 +1030 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <710b78fc1001032242j5b02824fxc64324484f0a5cfe@mail.gmail.com> References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> <710b78fc1001032242j5b02824fxc64324484f0a5cfe@mail.gmail.com> Message-ID: <710b78fc1001032252n5b301668iabc1e6dc779eb174@mail.gmail.com> 2010/1/4 Emlyn : >> Again, it is just a game, invented by sociologist types. Notice also a >> similarity with horoscopes: the description of each category is very general >> and at least moderately flattering. An explanation of the popularity of the >> game might be that everyone is pleased with the description they see. >> >> It would be cool to see a four-bit identifier game designed by >> engineers and scientists. Then, as a little joke, make the description >> of each category a biting criticism, such as the internet gag-horoscopes >> that went around a few years ago, where the horoscopes started out with the >> usual mush, but progressed to closing comments such as "those who know >> you well consider you an arrogant asshole." >> {8^D Does anyone here remember >> that game? >> >> spike > > For a system designed by scientists (well, psychologists), how about > the big 5 personality traits? > > http://en.wikipedia.org/wiki/Big_Five_personality_traits > > "The Big Five model is considered to be one of the most comprehensive, > empirical, data-driven research findings in the history of personality > psychology."
> > Elements are: Openness, Conscientiousness, Extraversion, > Agreeableness, and Neuroticism Oh, also, here's an online test: http://www.outofservice.com/bigfive/ And my results :-) http://www.outofservice.com/bigfive/results/?oR=0.95&cR=0.472&eR=0.562&aR=0.722&nR=0.281&y=1970&g=m -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From stefano.vaj at gmail.com Mon Jan 4 11:47:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 4 Jan 2010 12:47:36 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> Message-ID: <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> 2010/1/3 Stathis Papaioannou : > I think the argument from partial brain replacement that I have put > forward to Gordon shows that if you can reproduce the behaviour of the > brain, then you necessarily also reproduce the consciousness. > Simulating neurons and molecules is just a means to this end. "Consciousness" being hard to define as anything other than a social construct and a projection (and a pretty vague one, for that matter, inasmuch as it should be extensible to fruitflies...), the real point of the exercise is simply to emulate "organic-like" computational abilities with acceptable performance, brain-like architectures being demonstrably not too bad at the task. I do not really see anything that suggests that we could not do everything in software with a PC, a Chinese Room or a cellular automaton, without emulating *absolutely anything* of the actual working of brains... -- Stefano Vaj From stathisp at gmail.com Mon Jan 4 11:50:36 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jan 2010 22:50:36 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <867961.71243.qm@web36504.mail.mud.yahoo.com> References: <867961.71243.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/4 Gordon Swobe : > suggested abbreviations and conventions: > > m-neurons = material ("clockwork") artificial neurons > p-neurons = programmatic artificial neurons I'll add two more: b-neurons = biological neurons; c-neurons = consciousness-capable neurons. You claim: all b-neurons are c-neurons; some m-neurons are c-neurons; no p-neurons are c-neurons. > Sam = the patient with the m-neurons > Cram = the patient with the p-neurons (CRA-man) > > (If Sam and Cram look familiar it's because I used these names in a similar thought experiment of my own design.) >> Firstly, I understand that you have no philosophical >> objection to the idea that the clockwork neurons *could* have >> consciousness, but you don't think that they *must* have consciousness, >> since you don't (to this point) believe as I do that behaving like normal >> neurons is sufficient for this conclusion. Is that right? > > No: because I reject epiphenomenalism, I think Sam cannot pass the TT without genuine intentionality. If Sam's m-neurons fail to result in a passing TT score for Sam then we have no choice but to take his m-neurons back to the store and demand a refund. It seems to me you must accept some type of epiphenomenalism if you say that Cram can pass the TT while having different experiences to Sam. This also makes it impossible to ever study the NCC scientifically. This experiment would be the ideal test for it: the p-neurons function like c-neurons but without the NCC, yet Cram behaves the same as Sam.
There is therefore no way of knowing that you have actually taken out the NCC. >> Moreover, if consciousness is linked to substrate rather than function >> then it is possible that the clockwork neurons are conscious but with >> a different type of consciousness. > If Sam passes the TT and reports normal subjective experiences from m-neurons then I will consider him cured. I have no concerns about "type" of consciousness. As you agreed in a later post, only some m-neurons are c-neurons. It could be that an internal change in an m-neuron could turn it from a c-neuron to a ~c-neuron. But it seems you are saying there is no in between state: it is either a c-neuron or a ~c-neuron. Moreover, you seem to be saying that there is only one type of c-neuron that could fill the shoes of the original b-neuron, although presumably there are different m-neurons that could give rise to this c-neuron. Is that right? >> Secondly, suppose we agree that clockwork neurons can give >> rise to consciousness. What would happen if they looked like >> conventional clockwork at one level but at higher resolution we could >> see that they were driven by digital circuits, like the digital mechanism >> driving most modern clocks with analogue displays? That is, would >> the low level computations going on in these neurons be enough to >> change or eliminate their consciousness? > Yes. In that case the salesperson deceived us. He sold us p-neurons in a box labeled m-neurons. And if we cannot detect the digital nature of these neurons from careful physical inspection and must instead conceive of some digital platonic realm that drives or causes material objects, then you will have introduced into our experiment the quasi-religious philosophical idea of substance or property dualism. Suppose the m-neuron (which is a c-neuron) contains a mechanism to open and close sodium channels depending on the transmembrane potential difference. Would changing from an analogue circuit to a digital circuit for just this mechanism change the neuron from a c-neuron to a ~c-neuron? If not, then we could go about systematically replacing the analogue subsystems in the neuron until we have a pure p-neuron. At some point, according to what you have been saying, the neuron would suddenly switch from being a c-neuron to a ~c-neuron. Is it plausible that changing, say, one op-amp out of billions would have such a drastic effect? On the other hand, what could it mean if the neuron's (and hence the person's) consciousness smoothly decreased in proportion to its degree of computerisation? >> Finally, the most important point. The patient with the computerised >> neurons behaves normally and says he feels normal. > Yes. >> Moreover, he actually believes he feels normal and that he understands >> everything said to him, since otherwise he would tell us something is >> wrong. > No, he does not "actually" believe anything. He merely reports that he feels normal and reports that he understands. His surgeon programmed all p-neurons such that he would pass the TT and report healthy intentionality, including but not limited to p-neurons in Wernicke's area. This is why the experiment considers *partial* replacement. Even before the operation Cram is not a zombie: despite not understanding language he can see, hear, feel, recognise people and objects, understand that he is sick in hospital with a stroke, and he certainly knows that he is conscious.
After the operation he has the same feelings, but in addition he is pleased to find that he now understands what people say to him, just as he remembers before the stroke. That is, he behaves as if he understands what people say to him and he honestly believes that he understands what people say to him; whereas before the operation he behaves as if he lacks understanding and he knows that he lacks understanding, since when people speak to him it sounds like gibberish. So the post-op Cram is a very strange creature: he can have a normal conversation, appearing to understand everything said to him, honestly believing that he understands everything said to him, while in fact he doesn't understand a word. On the above account, it is difficult to make any sense of the word "understanding". Surely a person who believes he understands language and behaves as if he understands language does in fact understand language. If not, what more could you possibly require of him? You seem to understand me and (though I can't know another person's thoughts for sure) I take your word that you honestly believe you understand me, but this is exactly what would happen if you had been through Cram's operation as well; so it's possible that the ham sandwich you had for lunch yesterday destroyed the NCC in your language centre, and you just haven't noticed. The only other possibility if p-neurons are ~c-neurons is that Cram does in fact realise that he has no more understanding after the surgery than he did before, but can't do anything about it. He attempts to lash out and smash things in frustration but his body won't obey him, and he observes himself making meaningless noises which the treating team apparently understand to be some sort of thank-you speech. I believe that this is what Searle has said would happen, though it is some time since I came across the paper and I can't now find it. It would mean that Cram would be doing his thinking with something other than his brain, which is forced to behave as if everything was fine. So if p-neurons are ~c-neurons this leads to either partial zombies or extra-brain thought. There's no other way around it. Both possibilities are pretty weird, but I would say that the partial zombies offend logic while the extra-brain thought offends science. Do you still claim that the idea of a computer having a mind is more absurd than either of these two absurdities? -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 4 12:05:41 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jan 2010 23:05:41 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <201790.46747.qm@web36504.mail.mud.yahoo.com> References: <201790.46747.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/4 Gordon Swobe : >> I actually believe that semantics can *only* come from >> syntax, but if it can't, your fallback is that semantics >> comes from the physical activity inside brains. > > Something along those lines, yes. But we can't paste form onto substance and expect intrinsic intentionality, and that's all formal programs do to hardware substance. We might just as well write a letter and expect the letter to understand the words. Still, you have agreed that while programming is not sufficient for intelligence, it cannot prevent intelligence. So although you may have a strong hunch that p-neurons aren't c-neurons, you can't claim this with the force of logical necessity. 
And that's what would be required in order to justify the weirdness that the partial brain replacement experiment I have been describing would entail. -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 4 13:13:14 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 00:13:14 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> Message-ID: 2010/1/4 Stefano Vaj : > 2010/1/3 Stathis Papaioannou : >> I think the argument from partial brain replacement that I have put >> forward to Gordon shows that if you can reproduce the behaviour of the >> brain, then you necessarily also reproduce the consciousness. >> Simulating neurons and molecules is just a means to this end. > > "Consciousness" being hard to define as anything other than a social construct > and a projection (and a pretty vague one, for that matter, inasmuch as > it should be extensible to fruitflies...), the real point of the > exercise is simply to emulate "organic-like" computational abilities > with acceptable performance, brain-like architectures being > demonstrably not too bad at the task. I can't define or even describe the taste of salt, but I know what I have to do in order to generate it, and I can tell you whether an unknown substance tastes salty or not. That's what I want to know about consciousness in general: I can't define or describe it, but I know it when I have it, and I would like to know if I would still have it after undergoing procedures such as brain replacement. > I do not really see anything that suggests that we could not do > everything in software with a PC, a Chinese Room or a cellular > automaton, without emulating *absolutely anything* of the actual > working of brains... There's no more reason why an AI should emulate a brain than there is why a submarine should emulate a fish. However, if you have had a stroke and need the damaged part of your brain replaced, then it would be important to simulate the workings of your brain as closely as possible. It is not clear at present down to what level the simulation needs to go. -- Stathis Papaioannou From gts_2000 at yahoo.com Mon Jan 4 14:01:39 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 06:01:39 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <330161.3464.qm@web36506.mail.mud.yahoo.com> --- On Mon, 1/4/10, Stathis Papaioannou wrote: > It seems to me you must accept some type of epiphenomenalism if you say > that Cram can pass the TT while having different experiences to > Sam. I don't see how that follows, nor do I posit any guess as to their actual experiences (especially for Cram, who may have none by the time the doctors finish with him). I see this experiment in the medical context that you framed. As would happen in a real-world hospital setting, the neurosurgeons work on these poor fellows with the alphabet-soup neurons until they pass the TT and report normal subjective experiences. Cram's doctors have an extra luxury in that they can program the p-neurons to correct any lingering symptoms. If Sam's m-neurons fail, too bad for Sam. At no time can we really know what goes on in the patients' experiences, except that they report it to their doctors. > This also makes it impossible to ever study the NCC scientifically.
> This experiment would be the ideal test for it: the p-neurons function > like c-neurons but without the NCC, yet Cram behaves the same as Sam. We needn't create artificial neurons to study the NCC. We need to identify possible target areas and then to test our theories with technology that switches it off and on in a live patient. Most likely it involves a large swath of neurons that need simultaneously to have the correct synaptic activity and (I would guess) electrical coherence or patterns of some kind. (My guess about the electrical activity helps explain why I reject your beer-cans-and-toilet-paper model of the brain.) >> If Sam passes the TT and reports normal subjective > experiences from m-neurons then I will consider him cured. I > have no concerns about "type" of consciousness. > > As you agreed in a later post, only some m-neurons are > c-neurons. It could be that an internal change in an m-neuron could > turn it from a c-neuron to a ~c-neuron. But it seems you are saying there > is no in between state: it is either a c-neuron or a ~c-neuron. I consider them not much different from b-neurons, and just as in b-neurons I would not rule out the possibility of dysfunctional but still operational ones. gotta run, more later... feel free to respond... -gts From stathisp at gmail.com Mon Jan 4 15:10:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 02:10:08 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <330161.3464.qm@web36506.mail.mud.yahoo.com> References: <330161.3464.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/5 Gordon Swobe : > --- On Mon, 1/4/10, Stathis Papaioannou wrote: > >> It seems to me you must accept some type of epiphenomenalism if you say >> that Cram can pass the TT while having different experiences to >> Sam. > > I don't see how that follows, nor do I posit any guess as to their actual experiences (especially for Cram, who may have none by the time the doctors finish with him). You alter Cram's consciousness, but it has no effect on his behaviour. This is the case if you systematically go about replacing neuron after neuron, until his whole brain is gone, and along with it his consciousness. Therefore, consciousness has no effect on behaviour, at least in this case. >> This also makes it impossible to ever study the NCC scientifically. >> This experiment would be the ideal test for it: the p-neurons function >> like c-neurons but without the NCC, yet Cram behaves the same as Sam. > > We needn't create artificial neurons to study the NCC. We need to identify possible target areas and then to test our theories with technology that switches it off and on in a live patient. Most likely it involves a large swath of neurons that need simultaneously to have the correct synaptic activity and (I would guess) electrical coherence or patterns of some kind. (My guess about the electrical activity helps explain why I reject your beer-cans-and-toilet-paper model of the brain.) But how would we ever distinguish the NCC from something else that just had an effect on general neural function? If hypoxia causes loss of consciousness, that doesn't mean that the NCC is oxygen. -- Stathis Papaioannou From jonkc at bellsouth.net Mon Jan 4 16:49:44 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 11:49:44 -0500 Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <392806.41976.qm@web36507.mail.mud.yahoo.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> Message-ID: On Jan 3, 2010, at 2:42 PM, Gordon Swobe wrote: >> The operative word in the above is "evolved". Why did this mysterious >> "subjective symbol grounding" (bafflegab translation: consciousness) >> evolve? > > To help you communicate better with other monkeys, among other things. So consciousness effects behavior and say goodbye to the Chinese room. > I think you really want to ask how it happened that humans did not evolve as unconscious zombies. Why did evolution select consciousness? I think one good answer is that perhaps nature finds it cheaper when its creatures have first-person awareness of the things they do and say. So it's easier to make a conscious intelligence than an unconscious one. > > We would probably find it more efficient in computers also. So if you ever run across an intelligent computer you can be certain it's conscious. Or at least as certain as you are that your fellow human beings are conscious when they act intelligently. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 4 17:10:55 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 12:10:55 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <822376.84128.qm@web36501.mail.mud.yahoo.com> References: <822376.84128.qm@web36501.mail.mud.yahoo.com> Message-ID: <583AB203-5D73-4932-8930-9A1B93F39D25@bellsouth.net> On Jan 3, 2010, at 4:07 PM, Gordon Swobe wrote: > amoebas appear to have intelligence but most people including me would find themselves hard-pressed to say they have what I mean by consciousness. If you are willing to accept the fantastic premise that an amoeba is intelligent, I don't understand why you wouldn't also accept the far more modest proposition that it is conscious. According to Evolution, consciousness is easy but intelligence is hard; it took far longer to evolve one than the other. The parts of our brain responsible for the most intense emotions like pain, fear, anger and even love are many hundreds of millions of years old, but the parts responsible for higher intelligence of which we are so proud and which make our species unique are only about one million years old, perhaps less, perhaps much less. > I think Darwin and Searle would enjoy each other's company Searle would enjoy talking with Darwin but I doubt the feeling would be reciprocated. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 4 18:18:18 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 12:18:18 -0600 Subject: [ExI] effect/affect again In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> Message-ID: <4B4230EA.3010605@satx.rr.com> On 1/4/2010 10:49 AM, John Clark wrote: > So consciousness effects behavior I know you get some weird pleasure out of butchering the language with this word, John, but I don't think *anyone* would make the universal claim that consciousness effects behavior. The majority of behavior is effected--caused to occur--by reflex, habit, and other non-conscious control systems (driving automatically while thinking of something else, hitting a ball when playing tennis, etc etc). Presumably you meant to write "affects behavior" which is obviously true--consciousness has *some* influence on behavior, but not all.
The problem with playing games with accepted usage is that you can end up saying something stupid that you don't mean. Damien Broderick From jameschoate at austin.rr.com Mon Jan 4 18:33:58 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Mon, 4 Jan 2010 18:33:58 +0000 Subject: [ExI] effect/affect again In-Reply-To: <4B4230EA.3010605@satx.rr.com> Message-ID: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> It's worth mentioning that affect is a verb and effect is (usually) a noun. Affect is about cause, effect is about the thing being affected. ---- Damien Broderick wrote: > On 1/4/2010 10:49 AM, John Clark wrote: > > > So consciousness effects behavior > > I know you get some weird pleasure out of butchering the language with > this word, John, but I don't think *anyone* would make the universal > claim that consciousness effects behavior. The majority of behavior is > effected--caused to occur--by reflex, habit, and other non-conscious > control systems (driving automatically while thinking of something else, > hitting a ball when playing tennis, etc etc). Presumably you meant to > write "affects behavior" which is obviously true--consciousness has > *some* influence on behavior, but not all. > > The problem with playing games with accepted usage is that you can end > up saying something stupid that you don't mean. > -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From thespike at satx.rr.com Mon Jan 4 18:43:12 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 12:43:12 -0600 Subject: [ExI] effect/affect again In-Reply-To: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> Message-ID: <4B4236C0.4040307@satx.rr.com> On 1/4/2010 12:33 PM, jameschoate at austin.rr.com wrote: > It's worth mentioning that affect is a verb and effect is (usually) a noun. > > Affect is about cause, effect is about the thing being affected. No, in the way John was using "effect" it was a verb, just the wrong verb. From pharos at gmail.com Mon Jan 4 18:57:42 2010 From: pharos at gmail.com (BillK) Date: Mon, 4 Jan 2010 18:57:42 +0000 Subject: [ExI] effect/affect again In-Reply-To: <4B4236C0.4040307@satx.rr.com> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> <4B4236C0.4040307@satx.rr.com> Message-ID: On 1/4/10, Damien Broderick wrote: > No, in the way John was using "effect" it was a verb, just the wrong verb. > > In effect, it is just an affectation. 
BillK From sparge at gmail.com Mon Jan 4 19:16:25 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 4 Jan 2010 14:16:25 -0500 Subject: [ExI] effect/affect again In-Reply-To: References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> <4B4236C0.4040307@satx.rr.com> Message-ID: http://theoatmeal.com/comics/misspelling From gts_2000 at yahoo.com Mon Jan 4 20:09:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 12:09:50 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <582583.74076.qm@web36504.mail.mud.yahoo.com> --- On Mon, 1/4/10, Stathis Papaioannou wrote: >> I don't see how that follows, nor do I posit any guess >> as to their actual experiences (especially for Cram, who may >> have none by the time the doctors finish with him). > > You alter Cram's consciousness, but it has no effect on his > behaviour. Yes, the initial operation will almost certainly affect his behavior before he leaves the hospital, causing him to do and say strange things. His surgeon corrects those symptoms and side-effects with more programming and more p-neuron replacements until he can call his patient cured. But unbeknownst to the surgeon, who according to your experimental setup has no understanding of philosophy, the cured patient has no subjective experience. When I say I reject epiphenomenalism, I mean that I reject it as an explanation of normal human consciousness. Because I think consciousness plays a role in human behavior, I think Sam will fail his TT unless his m-neurons give him real consciousness. And unlike Cram's doctors, Sam's have no way to correct any side-effects if the m-neurons don't work as advertised. >> We needn't create artificial neurons to study the NCC. >> We need to identify possible target areas and then to test >> our theories with technology that switches it off and on in >> a live patient. > But how would we ever distinguish the NCC from something > else that just had an effect on general neural function? > If hypoxia causes loss of consciousness, that doesn't mean that > the NCC is oxygen. We know ahead of time that the presence of oxygen will play a critical role. Let us say we think neurons in brain region A play the key role in consciousness. If we do not shut off the supply of oxygen but instead shut off the supply of XYZ to region A, and the patient loses consciousness, we then have reason to say that oxygen, XYZ and the neurons in region A play important roles in consciousness. We then test many similar hypotheses with many similar experiments until we have a complete working hypothesis to explain the NCC. At the end of our research project we should have a reasonable theory that explains why George Foreman fell to the mat and could not get up after Muhammad Ali clobbered him in the 8th round in the Rumble in the Jungle. That happened over 30 years ago, and still nobody knows. -gts From spike66 at att.net Mon Jan 4 21:04:10 2010 From: spike66 at att.net (spike) Date: Mon, 4 Jan 2010 13:04:10 -0800 Subject: [ExI] effect/affect again In-Reply-To: References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02><4B4236C0.4040307@satx.rr.com> Message-ID: <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> > Subject: Re: [ExI] effect/affect again > > On 1/4/10, Damien Broderick wrote: > > No, in the way John was using "effect" it was a verb, just > the wrong verb. > > > > In effect, it is just an affectation. BillK Fortunately, Damien is an affable character, even if at times ineffable.
{8^D spike From jonkc at bellsouth.net Mon Jan 4 21:39:14 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 16:39:14 -0500 Subject: [ExI] effect/affect again In-Reply-To: <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02><4B4236C0.4040307@satx.rr.com> <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> Message-ID: <014A7D44-C323-482C-BF40-3537A46F37BB@bellsouth.net> On Jan 4, 2010, spike wrote: > Fortunately, Damien is an affable character, even if at times ineffable. And redoubtable too when he wasn't being inscrutable. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Jan 4 21:41:53 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 13:41:53 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <523651.30036.qm@web36508.mail.mud.yahoo.com> --- On Mon, 1/4/10, Stathis Papaioannou wrote: > Moreover, you seem to be saying that there is only one type of c-neuron > that could fill the shoes of the original b-neuron, although > presumably there are different m-neurons that could give rise to this > c-neuron. Is that right? 1. I think b-neurons work as c-neurons in the relevant parts of the brain. 2. I think all p-neurons work as ~c-neurons in the relevant parts of the brain. 3. I annoy Searle but do not, I think, fully disclaim his philosophy by hypothesizing that some possible m-neurons work like c-neurons. Does that answer your question? > Suppose the m-neuron (which is a c-neuron) contains a > mechanism to open and close sodium channels depending on the > transmembrane potential difference. Would changing from an analogue > circuit to a digital circuit for just this mechanism change the neuron > from a c-neuron to a ~c-neuron? Philosophically, yes. In a practical sense? Probably not in any detectable way. But you've headed down a slippery slope that ends with describing real natural brains as digital computers. I think you want to go there (and, speaking as an extropian, I certainly don't blame you for wanting to), and if so then perhaps we should just cut to the chase and go there to see if the idea actually works. >> No, he does not "actually" believe anything. He merely > reports that he feels normal and reports that he > understands. His surgeon programmed all p-neurons such that > he would pass the TT and report healthy intentionality, > including but not limited to p-neurons in Wernicke's area. > > This is why the experiment considers *partial* replacement. > Even before the operation Cram is not a zombie: despite not > understanding language he can see, hear, feel, recognise people and > objects, understand that he is sick in hospital with a stroke, and > he certainly knows that he is conscious. After the operation he has the > same feelings, but in addition he is pleased to find that he > now understands what people say to him, just as he remembers > before the stroke. I think that after the initial operation he becomes a complete basket-case requiring remedial surgery, and that in the end he becomes a philosophical zombie or something very close to one. If his surgeon has experience, then he becomes a zombie or near-zombie on day one. -gts From jonkc at bellsouth.net Mon Jan 4 21:27:14 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 16:27:14 -0500 Subject: [ExI] effect/affect again.
In-Reply-To: <4B4230EA.3010605@satx.rr.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> Message-ID: <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> On Jan 4, 2010, Damien Broderick wrote: >> So consciousness effects behavior > > I know you get some weird pleasure out of butchering the language with this word, John, I don't believe I've ever used the word "affect" in my life; I have mentioned it a few times in the past but only when somebody (it may even have been you) accused me of using "effect" too much; but I've always had a fondness for cause and effect and I figure if effect is good enough for cause it's good enough for me. And besides, this entire debate is between those who think the human will is fundamentally different from other kinds of events and those like me who disagree. I think "affectation" still has a place in the English language but "affect" should die and join other extinct words in that great dictionary in the sky, words like "methinks", "cozen", "fardel", "huggermugger", "zounds" and "typewriter". > but I don't think *anyone* would make the universal claim that consciousness effects behavior. I'm someone and I think consciousness effects behavior; if it didn't we wouldn't have it; at least that's what I think and Darwin agrees with me. As I said before, saying I scratched my nose because I wanted to is a perfectly valid thing to say, as is saying that the balloon expanded because the pressure inside it increased; I do however insist that there is more than one way to correctly describe both of those events. > Presumably you meant to write "affects behavior" You presume incorrectly, I meant to say "effects behavior" and that is exactly what I said. > consciousness has *some* influence on behavior, but not all. If A effects B there is no reason C, D, E and F couldn't effect B too. In fact logically it could be that nothing effects B at all but B changes anyway. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Jan 4 22:07:48 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 14:07:48 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <583AB203-5D73-4932-8930-9A1B93F39D25@bellsouth.net> Message-ID: <579515.50389.qm@web36501.mail.mud.yahoo.com> --- On Mon, 1/4/10, John Clark wrote: > If you are willing to accept the fantastic premise that an amoeba is > intelligent Why do you consider it such a fantastic premise? Amoebas and other such organisms can find food and so on. Sure looks like intelligence to me. Too bad those unconscious critters can't know that I hold them in such high esteem. -gts From pharos at gmail.com Mon Jan 4 22:17:29 2010 From: pharos at gmail.com (BillK) Date: Mon, 4 Jan 2010 22:17:29 +0000 Subject: [ExI] effect/affect again. In-Reply-To: <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: On 1/4/10, John Clark wrote: > You presume incorrectly, I meant to say "effects behavior" and that is > exactly what I said. > > If A effects B there is no reason C, D, E and F couldn't effect B too. In fact > logically it could be that nothing effects B at all but B changes anyway. > > Damien, I don't think your protestations are going to affect John. His behavior remains unaffected by your ineffectual protests. He is effectively immune.
BillK From thespike at satx.rr.com Mon Jan 4 22:27:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 16:27:57 -0600 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <4B426B6D.4080704@satx.rr.com> On 1/4/2010 4:17 PM, BillK wrote: > I don't think your protestations are going to affect John. > He is effectively immune. Yes, as you suggested previously, it's an affectation. Better than word-blindness, I suppose, but mot wery afficient for conveying ontended meaming. From jonkc at bellsouth.net Mon Jan 4 22:55:32 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 17:55:32 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <579515.50389.qm@web36501.mail.mud.yahoo.com> References: <579515.50389.qm@web36501.mail.mud.yahoo.com> Message-ID: On Jan 4, 2010, Gordon Swobe wrote: > Why do you consider it such a fantastic premise? Amoebas and other such organisms can find food and so on. Sure looks like intelligence to me. Well, I admit it does a little bit seem like intelligence to me too, but only a little bit; if a computer had done the exact same thing rather than an amoeba you would be screaming that it has nothing to do with intelligence, it's just programming. But never mind: on an intelligence scale of zero to 100,000,000,000, with me being 80 and Searle being 49, I'd put amoebas at .000000000000000001. I'd put that same amoeba at .01 on the consciousness scale. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Mon Jan 4 22:57:25 2010 From: aware at awareresearch.com (Aware) Date: Mon, 4 Jan 2010 14:57:25 -0800 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: On Mon, Jan 4, 2010 at 2:17 PM, BillK wrote: > I don't think your protestations are going to affect John. There once was a man named John Clark, whose bite was as good as his bark. When you said "the effect" he heard "defect" which proceeded to ignite a new spark... which will have little effect on the endless Gordian knot composed of these threads. One bold stroke of insight is all that is required to escape the Gordian knot (Gordon, Guardian, not?) but while the discussion has had some effect on observers' observed affect, it has yet to affect observations on the recursive relationship of the observer to the observed. - Jef From aware at awareresearch.com Tue Jan 5 00:00:05 2010 From: aware at awareresearch.com (Aware) Date: Mon, 4 Jan 2010 16:00:05 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <501890.28524.qm@web113619.mail.gq1.yahoo.com> References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Message-ID: On Sun, Jan 3, 2010 at 7:36 AM, Ben Zaiboc wrote: > I'm not at all convinced by these personality tests. Every time I've tried a Myers-Briggs test (being just as vain as everyone else), I've got a different result. I've taken 4 over 20+ years. Three sponsored by business/management seminars and one as part of a college course that I took with one of our kids. Mine have always indicated INTJ, with most of the scores moving closer to center.
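(Incidentally, spike's "four-bit identifier" quip upthread is literal: a type is just four binary choices, so "what a difference a letter makes" is a one-bit difference. A toy sketch in Python -- the encoding is mine and purely illustrative, and it says nothing about whether the instrument measures anything real:

    # Each MBTI axis is a binary choice; a full type is four bits.
    AXES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

    def to_bits(mbti):
        """Encode e.g. 'INTJ' as four bits, one per axis."""
        return tuple(0 if letter == pair[0] else 1
                     for letter, pair in zip(mbti.upper(), AXES))

    def letters_apart(a, b):
        """Hamming distance: the letters by which two types differ."""
        return sum(x != y for x, y in zip(to_bits(a), to_bits(b)))

    print(to_bits("INTJ"))                # (1, 1, 0, 0)
    print(letters_apart("ISTJ", "INTJ"))  # 1
    print(letters_apart("INTJ", "ENFP"))  # 3

On that toy metric ISTJ and INTJ are nearest neighbors, one bit apart, while INTJ and ENFP differ on three of the four axes.)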
> So far, I'm INTP, INFJ, and INFP, so rather than an xNTx, I seem to be an INxx. Does that mean anything? That you're not good at taking tests? > Also, the summaries remind me more of a horoscope than anything. Why do I never read anything bad about myself? That's suspicious, I'm not so vain as to think I don't have bad points. Well, quite a lot of statistical effort was applied to MBTI and businesses find utility in them for training people how to recognize differences and find ways to relate to other temperaments. But as Emlyn points out, the Big 5 system has superseded MBTI for academic work. You should note, however, that the descriptions do not say anything "bad" about any of the types, because it's incoherent to say there is anything intrinsically bad about the true nature of anything. That said, they point out plenty of propensities. Compare ISTJ (Gordon, presumably, but with high certainty) with INTJ (Jef), per the Wikipedia descriptions: ISTJ ------- "ISTJs are faithful, logical, organized, sensible, and earnest traditionalists." "...prefer concrete and useful applications and will tolerate theory only if it leads to these ends." "Material that seems too easy or too enjoyable leads ISTJs to be skeptical of its merit." "...they resist putting energy into things that don't make sense to them..." "They have little use for theory or abstract thinking, unless the practical application is clear." INTJ ------- "INTJs apply (often ruthlessly) the criterion "Does it work?" to everything from their own research efforts to the prevailing social norms." "...an unusual independence of mind, freeing the INTJ from the constraints of authority, convention, or sentiment for its own sake..." "...known as the "Systems Builders" of the types, perhaps in part because they possess the unusual trait combination of imagination and reliability." "...seek new angles or novel ways of looking at things. They enjoy coming to new understandings...." "They harbor an innate desire to express themselves by conceptualizing their own intellectual designs." Can you see from the above why I might view Gordon (and Lee) as puzzles, while they might see me as an unfathomable irritant? Would members of this list have any trouble deciding between Max and Natasha which is the likely INTJ and which is the likely ENFP? - Jef From spike66 at att.net Tue Jan 5 00:01:21 2010 From: spike66 at att.net (spike) Date: Mon, 4 Jan 2010 16:01:21 -0800 Subject: [ExI] effect/affect again. In-Reply-To: <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> References: <392806.41976.qm@web36507.mail.mud.yahoo.com><4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: On Behalf Of John Clark ...I think "affectation" still has a place in the English language but "affect" should die and join other extinct words in that great dictionary in the sky, words like "methinks", "cozen", "fardel", "huggermugger", "zounds" and "typewriter"...John K Clark Methinks otherwise, John. I like the word methinks, and use it occasionally. I actually agree that the words affect and effect are a flaw in the language. Words that almost rhyme should have very different meanings and usages; those two invite conflation methinks. I also want to keep fardel. I didn't know what that was until I looked it up. Now I shall try to use it. I propose a game of balderdash.
Perhaps you know of it: the players are given obscure English words, and they make up definitions, and try to fool the other players into choosing their definitions over the others, or the real one. I will not play on fardel, since I looked it up. But here are my other plays: cozen: the group with which one meditates. huggermugger: one who attempts to take money by force from environmentalists. zounds: the noise often emitted by sleepers. typewriter: you stumped me on that one, never heard of it. Actually I must be honest and disqualify myself from this one, for I am one who not only knows what is a typewriter, but actually used one, in college. I can out-geezer almost everyone here by having used the kind (in college!) which does not plug in to the wall. Perhaps only Damien can match this, methinks. spike From rafal.smigrodzki at gmail.com Tue Jan 5 01:55:27 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Mon, 4 Jan 2010 20:55:27 -0500 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> On Mon, Jan 4, 2010 at 7:01 PM, spike wrote: > > Actually I must be honest and disqualify myself from this one, for I am one > who not only knows what is a typewriter, but actually used one, in college. > I can out-geezer almost everyone here by having used the kind (in college!) > which does not plug in to the wall. Perhaps only Damien can match this, > methinks. ### I can match you on that one: I learned to type on a manual typewriter, owned by my father by special dispensation of the United Polish Communist Worker Party in the late 70's. Now beat this: I have helped my mother wring laundry using a hand-crank operated mangle attached to the non-automatic washing machine we had in the early 70's. Rafal From thespike at satx.rr.com Tue Jan 5 02:10:44 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 20:10:44 -0600 Subject: [ExI] mangle In-Reply-To: <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> Message-ID: <4B429FA4.2000704@satx.rr.com> On 1/4/2010 7:55 PM, Rafal Smigrodzki wrote: > Now beat this: I have helped my mother wring laundry using a > hand-crank operated mangle attached to the non-automatic washing > machine we had in the early 70's. Ha! I helped my mother haul out washing to the line to dry after she'd *boiled it in the copper* (as it was called) long before we had a washing machine. Damien Broderick From max at maxmore.com Tue Jan 5 02:14:26 2010 From: max at maxmore.com (Max More) Date: Mon, 04 Jan 2010 20:14:26 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001050221.o052LEsG011499@andromeda.ziaspace.com> Jef: >Would members of this list have any trouble deciding between Max and >Natasha which is the likely INTJ and which is the likely ENFP? Nope. I always come out solidly INTP. (Unless I remember wrongly; it's been years since I last took the test; but I'm pretty sure that's right.)
Max From max at maxmore.com Tue Jan 5 02:26:33 2010 From: max at maxmore.com (Max More) Date: Mon, 04 Jan 2010 20:26:33 -0600 Subject: [ExI] mangle Message-ID: <201001050226.o052QiLw025830@andromeda.ziaspace.com> Damien wrote: >Ha! I helped my mother haul out washing to the line to dry after she'd >*boiled it in the copper* (as it was called) long before we had a >washing machine. Washing line? You're bloody lucky (pron. "looky")! In *my* day, we didn't have no washin' lines. We 'ad to hold the wet clothes for several hours, standing on one foot, as we blowed on it manually to help the evaporation. And if we didn't do it right, my mam would chop us into pieces and feed us to me dad for dinner. Forgive me. http://www.youtube.com/watch?v=Xe1a1wHxTyo Max From aware at awareresearch.com Tue Jan 5 02:34:40 2010 From: aware at awareresearch.com (Aware) Date: Mon, 4 Jan 2010 18:34:40 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <201001050221.o052LEsG011499@andromeda.ziaspace.com> References: <201001050221.o052LEsG011499@andromeda.ziaspace.com> Message-ID: On Mon, Jan 4, 2010 at 6:14 PM, Max More wrote: >> Would members of this list have any trouble deciding between Max and >> Natasha which is the likely INTJ and which is the likely ENFP? > > Nope. I always come out solidly INTP. (Unless I remember wrongly; it's been > years since I last took the test; but I'm pretty sure that's right.) I honestly wasn't sure about the P/J dimension in your case. Thanks, - Jef From max at maxmore.com Tue Jan 5 02:10:48 2010 From: max at maxmore.com (Max More) Date: Mon, 04 Jan 2010 20:10:48 -0600 Subject: [ExI] effect/affect again. Message-ID: <201001050237.o052bcnD029935@andromeda.ziaspace.com> >Actually I must be honest and disqualify myself from this one, for I >am one who not only knows what is a typewriter, but actually used >one, in college. I can out-geezer almost everyone here by having >used the kind (in college!) which does not plug in to the >wall. Perhaps only Damien can match this, methinks. > >spike My *dear* fellow. I'll have you know that I wrote my undergraduate thesis using a typewriter. (Something on eudaimonic egoism..., around 1986.) I also created three issues of a quite fancy comics fanzine called Starlight in 1979 and 1980 -- with hand-justified columns and some experimental, slanted column layouts, entirely using a typewriter and hand-spacing to achieve justified columns. (You might reply that the effect was a mere affectation, but it was still an effort incomparable to anything post-computer.) Max From spike66 at att.net Tue Jan 5 05:09:44 2010 From: spike66 at att.net (spike) Date: Mon, 4 Jan 2010 21:09:44 -0800 Subject: [ExI] kepler finds a couple of hot objects Message-ID: <95CE8F580E874D159AA4207317F56F10@spike> This is cool: http://www.foxnews.com/scitech/2010/01/04/planet-hunting-telescope-unearths-hot-mysteries-space/?test=latestnews This comment gave me a good harrr har: How hot? Try 26,000 degrees Fahrenheit (14,425 Celsius). That is hot enough to melt lead or iron. Ummm, yes. That would be plenty hot enough to not only melt but boil everything on the chart, and still have plenty of degrees to spare. {8^D spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lcorbin at rawbw.com Tue Jan 5 05:41:10 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 21:41:10 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> Message-ID: <4B42D0F6.8020003@rawbw.com> Stefano and Stathis, respectively, wrote: > "Consciousness" being hard to define as anything other than a social construct > and a projection (and a pretty vague one, for that matter, inasmuch as > it should be extensible to fruitflies...), the real point of the > exercise is simply to emulate "organic-like" computational abilities > with acceptable performance, brain-like architectures being > demonstrably not too bad at the task. The key question is whether or not you would choose to be uploaded given a preview of the resulting machinery. It's what all these discussions are really all about. As for me, so long as there is a *causal* mechanism (i.e. information flow from state to state, with time being a key element), and it will produce behavior that is within the range of normal behavior for me, then I'm on board. Stathis: > I can't define or even describe the taste of salt, but I know what I > have to do in order to generate it, and I can tell you whether an > unknown substance tastes salty or not. That's what I want to know > about consciousness in general: I can't define or describe it, but I > know it when I have it, and I would like to know if I would still have > it after undergoing procedures such as brain replacement. Yes, that's it. It is logically conceivable, after all, as several on this list maintain, that every time you replace any biologically operating part with a mechanism that, say, does not involve chemical transformations, then your experience is diminished proportionally, with the end result that any non-biological entity actually has none of this consciousness you refer to. While *logically* possible, of course, I consider this possibility very remote. Lee From lcorbin at rawbw.com Tue Jan 5 05:52:44 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 21:52:44 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> Message-ID: <4B42D3AC.6090504@rawbw.com> Jef wrote at 1/2/2010 12:09 PM: > [Lee wrote] > >> Let's suppose for a moment that [the skeptical view] is right. >> In other words, internal mechanisms of the neuron must also be >> simulated. > > Argh, "turtles all the way down", indeed. Then must nature also > compute the infinite expansion of the digits of pi for every soap > bubble as well? Well, as you know, in one sense nature does compute infinite expansions---but not in a very useful sense. It's annoying that nature exactly solves the Schrödinger differential equation for the helium atom whereas we cannot. >> ...if presented with two simulations >> only one of which is a true emulation, and they're both >> exhibiting behavior indicating extreme pain, we want to >> focus all relief efforts only on the one. We really do >> *not* care a bit about the other. > > This way too leads to contradiction, for example in the case of a > person tortured, then with memory erased, within a black box. I do not see any contradiction here. I definitely do not want that experience whether or not memories are erased, nor, in my opinion, would it be moral for me to sanction it happening to someone else.
I consider the addition or deletion of memories per se as not affecting the total benefit over some interval to an entity. Yes, sometimes memory erasure might make certain conditions livable, and certain other memory additions might even produce fond reminiscences. > The morality of any act depends not on the **subjective** state of > another, which by definition one could never know, but on our > assessment of the rightness, in principle, of the action, in terms of > our values. Yes, we're always guessing (though with pretty good guesses, in my opinion) about what others experience. >> For those of us who are functionalists (or, in my case, almost >> 100% functionalists), it seems almost inconceivable that the causal >> components of an entity's having an experience require anything >> beneath the neuron level. In fact, it's very likely that the >> simulation of whole neuron tracks or bundles suffice. > > Let go of the assumption of an **essential** consciousness, and you'll > see that your functionalist perspective is entirely correct, but it > needs only the level of detail, within context, to evoke the > appropriate responses of the observer. To paraphrase John Clark, > "swiftness" is not in the essence of a car, and the closer one looks > the less apt one is to find it. Furthermore (and I realize that John > didn't say /this/), a car displays "swiftness" only within an > appropriate context. But the key is understanding that this > "swiftness" (separate from formal descriptions of rotational velocity, > power, torque, etc.) is a function of the observer. But this makes it sound to me as if you're going right back to a "subjective" consideration, namely, this time around, in the mind of an observer. So if A and B are your observers, then whether or not true suffering is occurring to C is a function of A or B? > Happy New Year, Lee. Thanks. Happy New Year to you too! Lee From lcorbin at rawbw.com Tue Jan 5 06:03:01 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 22:03:01 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Message-ID: <4B42D615.1070306@rawbw.com> Jef writes > INTJ > > ------- > "INTJs apply (often ruthlessly) the criterion "Does it work?" to > everything from their own research efforts to the prevailing social > norms." > "...an unusual independence of mind, freeing the INTJ from the > constraints of authority, convention, or sentiment for its own > sake..." > "...known as the "Systems Builders" of the types, perhaps in part > because they possess the unusual trait combination of imagination and > reliability." > "...seek new angles or novel ways of looking at things. They enjoy > coming to new understandings...." > "They harbor an innate desire to express themselves by conceptualizing > their own intellectual designs." > > Can you see from the above why I might view Gordon (and Lee) as > puzzles, while they might see me as an unfathomable irritant? Once again, don't confuse objectivity ("to see" or "view") and subjectivity. Objectively, you are absolutely an unfathomable irritant, as you put it, no question about it. > Would members of this list have any trouble deciding between Max and > Natasha which is the likely INTJ and which is the likely ENFP? I don't suppose anyone would :-) Incidentally, many years ago everyone (including me) that I was well-acquainted with was INTP. (But I was almost a J.) Now all my acquaintances are INTJ.
Do we get more judgmental, or decisive, or something as we age, or do you suppose that the same kinds of people 25 years ago that were INTP are now INTJ? Maybe we've put a decade or two more between us and the obscurantism of the sixties. Lee From lcorbin at rawbw.com Tue Jan 5 06:09:13 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 22:09:13 -0800 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com><4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <4B42D789.2070707@rawbw.com> spike wrote: > typewriter: you stumped me on that one, never heard of it. > > Actually I must be honest and disqualify myself from this one, for I am one > who not only knows what is a typewriter, but actually used one, in college. > I can out-geezer almost everyone here by having used the kind (in college!) > which does not plug in to the wall. Perhaps only Damien can match this, > methinks. I actually owned one of the devices! I paid $200 or so for it, back when that was real money. Perhaps we should, following a hint from Rafal, just use "uffect" in place of those two words people hopelessly confuse (mostly either because they're too lazy or because, like John Clark, they suffer from congenital stubbornness). But I'm sure that all the traditionalists like Damien would just have a cow at this neologism. Lee From lcorbin at rawbw.com Tue Jan 5 06:15:22 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 22:15:22 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <201001050221.o052LEsG011499@andromeda.ziaspace.com> References: <201001050221.o052LEsG011499@andromeda.ziaspace.com> Message-ID: <4B42D8FA.4010108@rawbw.com> Max More wrote: > I always come out solidly INTP. (Unless I remember wrongly; it's > been years since I last took the test; but I'm pretty sure that's right.) I bet if you take it again, you'll now come out INTJ. That's what happened to me. Lee From rafal.smigrodzki at gmail.com Tue Jan 5 06:41:43 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 5 Jan 2010 01:41:43 -0500 Subject: [ExI] mangle In-Reply-To: <4B429FA4.2000704@satx.rr.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> <4B429FA4.2000704@satx.rr.com> Message-ID: <7641ddc61001042241tb648304mf42fa6778422d651@mail.gmail.com> On Mon, Jan 4, 2010 at 9:10 PM, Damien Broderick wrote: > On 1/4/2010 7:55 PM, Rafal Smigrodzki wrote: > >> Now beat this: I have helped my mother wring laundry using a >> hand-crank operated mangle attached to the non-automatic washing >> machine we had in the early 70's. > > Ha! I helped my mother haul out washing to the line to dry after she'd > *boiled it in the copper* (as it was called) long before we had a washing > machine. ### OK, so match me that: I was scurrying around (being a toddler in the late 60's) as my grammy was stomping the cabbage - mixing sauerkraut in a large tub by stomping it with bare feet, after the old Silesian sauerkraut foot-based seasoning fashion.
Rafal From stathisp at gmail.com Tue Jan 5 07:15:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 18:15:50 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <4B42D0F6.8020003@rawbw.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> <4B42D0F6.8020003@rawbw.com> Message-ID: 2010/1/5 Lee Corbin : > Yes, that's it. It is logically conceivable, after all, as > several on this list maintain, that every time you replace > any biologically operating part with a mechanism that, say, > does not involve chemical transformations, then your > experience is diminished proportionally, with the end > result that any non-biological entity actually has none > of this consciousness you refer to. While *logically* > possible, of course, I consider this possibility very > remote. If your language centre were zombified, you would be able to participate normally in a conversation and you would honestly believe that you understood everything that was said to you, but in fact you would understand nothing. It's possible that you have a zombified language centre right now, a side-effect of the sandwich you had for lunch yesterday. You wouldn't know it, and even if it were somehow revealed to you, there wouldn't be any good reason to avoid those sandwiches in future. If you think that such a distinction between true experience and zombie experience is incoherent, then arguably it is not even logically possible for artificial neurons to be functionally identical to normal neurons but lack the requirements for consciousness. -- Stathis Papaioannou From max at maxmore.com Tue Jan 5 07:44:11 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 01:44:11 -0600 Subject: [ExI] kepler finds a couple of hot objects Message-ID: <201001050744.o057iI0S021981@andromeda.ziaspace.com> spike posted: > >For now, NASA researcher Jason Rowe, who found the objects, said > he calls them "hot companions." I'm going to bed now, before I get too excited. Thanks for posting that, spike. Interesting. Max From max at maxmore.com Tue Jan 5 07:55:04 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 01:55:04 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001050755.o057tCga000596@andromeda.ziaspace.com> Lee Corbin wrote: >Max More wrote: > > > I always come out solidly INTP. (Unless I remember wrongly; it's > > been years since I last took the test; but I'm pretty sure that's right.) > >I bet if you take it again, you'll now come out INTJ. >That's what happened to me. Well, I was wondering about that, Lee. I wouldn't mind taking the test again to see if my type has shifted. I think I may well be more J-ish, but the test should give a better indication. I can take the test in my copy of David Keirsey's book, "Please Understand Me II: Temperament, Character, Intelligence", but do you know of a better (more thorough and/or updated version) online? Max ------------------------------------- Max More, Ph.D. 
Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From stathisp at gmail.com Tue Jan 5 09:23:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 20:23:44 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <523651.30036.qm@web36508.mail.mud.yahoo.com> References: <523651.30036.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/5 Gordon Swobe : > --- On Mon, 1/4/10, Stathis Papaioannou wrote: > >> Moreover, you seem to be saying that there is only one type of c-neuron >> that could fill the shoes of the original b-neuron, although >> presumably there are different m-neurons that could give rise to this >> c-neuron. Is that right? > > 1. I think b-neurons work as c-neurons in the relevant parts of the brain. > > 2. I think all p-neurons work as ~c-neurons in the relevant parts of the brain. > > 3. I annoy Searle, but do not, I think, fully disclaim his philosophy, by hypothesizing that some possible m-neurons work like c-neurons. > > Does that answer your question? Is there only one type of c-neuron or is it possible to insert m-neurons which, though they are functionally identical to b-neurons, result in a different kind of consciousness? >> Suppose the m-neuron (which is a c-neuron) contains a >> mechanism to open and close sodium channels depending on the >> transmembrane potential difference. Would changing from an analogue >> circuit to a digital circuit for just this mechanism change the neuron >> from a c-neuron to a ~c-neuron? > > Philosophically, yes. In a practical sense? Probably not in any detectable way. But you've headed down a slippery slope that ends with describing real natural brains as digital computers. I think you want to go there (and, speaking as an extropian, I certainly don't blame you for wanting to), and if so then perhaps we should just cut to the chase and go there to see if the idea actually works. Philosophy has to give an answer that's in accordance with what would actually happen, what you would actually experience, otherwise it's worse than useless. The discussion we have been having is an example of a philosophical problem with profound practical consequences. If I get a new super-fast computerised brain and you're right I would be killing myself, whereas if I'm right I would become an immortal super-human. I think it's important to be sure of the answer before going ahead! You shouldn't dismiss the slippery slope argument so quickly. Either you suddenly become a zombie when a certain proportion of your neurons' internal workings are computerised or you don't. If you don't, then the options are that you don't become zombified at all or that you become zombified in proportion to how much of each neuron is computerised. Either sudden or gradual zombification seems implausible to me. The only plausible alternative is that you don't become zombified at all. >>> No, he does not "actually" believe anything. He merely >> reports that he feels normal and reports that he >> understands. His surgeon programmed all p-neurons such that >> he would pass the TT and report healthy intentionality, >> including but not limited to p-neurons in Wernicke's area. >> >> This is why the experiment considers *partial* replacement.
>> Even before the operation Cram is not a zombie: despite not >> understanding language he can see, hear, feel, recognise people and >> objects, understand that he is sick in hospital with a stroke, and >> he certainly knows that he is conscious. After the operation he has the >> same feelings, but in addition he is pleased to find that he >> now understands what people say to him, just as he remembers >> before the stroke. > > I think that after the initial operation he becomes a complete basket-case requiring remedial surgery, and that in the end he becomes a philosophical zombie or something very close to one. If his surgeon has experience then he becomes a zombie or near zombie on day one. I don't understand why you say this. Perhaps I haven't explained what I meant well. The p-neurons are drop-in replacements for the b-neurons, just like pulling out the LM741 op amps in a piece of audio equipment and replacing them with TL071's. The TL071 performs the same function as the 741 and has the same pin-out, so the equipment will function just the same, even though the internal circuitry of the two IC's is quite different. You need know nothing at all about the insides of op amps to use them or find replacements for them in a circuit: as long as the I/O behaviour is the same, one could be driven by vacuum tubes and the other by little demons and the circuit would work just fine in both cases. It's the same with the p-neurons. The manufacturer guarantees that the I/O behaviour of a p-neuron is identical to that of the b-neuron that it replaces, but that's all that is guaranteed: the manufacturer neither knows nor cares about consciousness, understanding or intentionality. Now, isn't it clear from this that Cram must behave normally and must (at least) have normal experiences in the parts of his brain which aren't replaced, given that he wasn't a zombie before the operation? If Cram has neurons in his language centre replaced then he must be able to communicate normally and respond to verbal input normally in every other way: draw a picture, laugh with genuine amusement at a joke, engage in philosophical debate. He must also genuinely believe that he understands everything, since if he didn't he would tell us. So you are put in a position where you have to maintain that Cram behaves as if he has understanding and genuinely believes that he has understanding, while in fact he doesn't understand anything. Is this position coherent? -- Stathis Papaioannou From pharos at gmail.com Tue Jan 5 09:59:01 2010 From: pharos at gmail.com (BillK) Date: Tue, 5 Jan 2010 09:59:01 +0000 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <201001050755.o057tCga000596@andromeda.ziaspace.com> References: <201001050755.o057tCga000596@andromeda.ziaspace.com> Message-ID: On 1/5/10, Max More wrote: > Lee Corbin wrote: > > I bet if you take it again, you'll now come out INTJ. > > That's what happened to me. > > > > I wouldn't mind taking the test again to see if my type has shifted. I > think I may well be more J-ish, but the test should give a better > indication. I can take the test in my copy of David Keirsey's book, "Please > Understand Me II: Temperament, Character, Intelligence", but do you know of > a better (more thorough and/or updated version) online? > > As Ben mentioned, these personality types come out sounding like astrology descriptions. (Don't say anything bad in case you drive the paying customers away).
The fact that dating and match-making sites use them is another minus point. Perhaps Lee is now INTJ because he's turned into a boring old fart? :) How about these descriptions? INTJ People hate you. I mean, you're pretty damn clever and you know it. You love to flaunt your potential. Heard the word "arrogant" lately? How about "jerk?" Or perhaps they only say that behind your back. That's right. I know I can say this cause you're not going to cry. You're not exactly the most emotional person. You'd rather spend time with your theoretical questions and abstract theories than with other people. Ever been kissed? Ever even been on a date? Trust me, your inflated ego is a complete turnoff with the opposite sex and I am telling you, you're not that great with relationships as it is. You're never going to be a dude or chick magnet, purely because you're more concerned with yourself than others. Meh. They all hate you already anyway. How about this- "stubborn?" Hrm? Heard that lately? All those facts which don't fit your theories must just be wrong, right? I mean, really, the vast amounts of time you spend with your head in the clouds...you're just plain strange. -------------------------------------- INTP Talked to another human being lately? I'm serious. You value knowledge above ALL else. You love new ideas, and become very excited over abstractions and theories. The fact that nobody else cares still hasn't become apparent to you... Nerd's a great word to describe you, and I seriously couldn't care less about the different definitions of the word and why you're actually more of a geek than a nerd. Don't pretend you weren't thinking that. You want every single minuscule fact and theory to be presented correctly. Critical? Sarcastic? Cynical? Pessimistic? Just a few words to describe you when you're at your very best...*cough* Sorry, I mean worst. Picking up the dudes or dudettes isn't something you find easy, but don't worry too much about it. You can blame it on your personality type now. On top of all this, you're shy. Nice one. Now, quickly go and delete everything about "theoretical questions" from your profile page. As long as nobody tries to start a conversation with you, just MAYBE you'll now have a chance of picking up a date. But don't get your hopes up. ----------------------------------- ISTJ One word. Boring. Sums you up to a tee. You're responsible, trustworthy, serious and down to earth. Boring. Boring. Boring. You play by the rules. You follow tradition. You encourage structure. You insist that EVERYBODY do EVERYTHING by the book. Seriously, is there even an ounce of imagination in that little brain of yours? I mean, what's the point of imagination, right? It has no practical value... As far as you're concerned, abstract theories can go screw themselves. You just want the facts, all the facts and nothing but the facts. Oh. And you're a perfectionist. About everything. You know that the previous sentence was gramattically incorrect and that "gramattically" was spelt wrong. Your financial records are correct to 25 decimal places and your bedroom is in pristine condition. In fact, you don't even sleep on your bed anymore for fear that you might crease the sheets. Thankfully, you don't have anyone else to share the bed with, because you're uncomfortable expressing affection and emotion to others. Too bad. ---------------- Do you still like personality tests ??????? The best personality tester is a Labrador dog.
He loves you and thinks you are wonderful and ignores all your little defects. What more do you want ??? ;) BillK From dharris at livelib.com Tue Jan 5 10:07:08 2010 From: dharris at livelib.com (David C. Harris) Date: Tue, 05 Jan 2010 02:07:08 -0800 Subject: [ExI] effect/affect again. In-Reply-To: <201001050237.o052bcnD029935@andromeda.ziaspace.com> References: <201001050237.o052bcnD029935@andromeda.ziaspace.com> Message-ID: <4B430F4C.5040104@livelib.com> Max More wrote: > >> Actually I must be honest and disqualify myself from this one, for I >> am one who not only knows what is a typewriter, but actually used >> one, in college. I can out-geezer almost everyone here by having used >> the kind (in college!) which does not plug in to the wall. Perhaps >> only Damien can match this, methinks. >> >> spike > > My *dear* fellow. I'll have you know that I wrote my undergraduate > thesis using a typewriter. (Something on eudaimonic egoism..., around > 1986.) I also created three issues of a quite fancy comics fanzine > called Starlight in 1979 and 1980 -- with hand-justified columns and > some experimental, slanted column layouts, entirely using a typewriter > and hand-spacing to achieve justified columns. (You might reply that > the effect was a mere affectation, but it was still an effort > incomparable to anything post-computer.) > > Max Ahhhh, honored Max, geezerdom is not earned by stupendous effort and skill, which you exhibit, but by being OLD! I think I bought my typewriter (elite size character set composed of UGLY sans serif letters) around 1963, used it for a few years, and submitted decks of "IBM cards" to a CDC 6600 time-shared mainframe around 1965 at UC Berkeley. Now that equipment is making me smile during visits to the Computer History Museum in Mountain View, CA. I claim less talent and more OLD! ;-) If regenerative medicine doesn't save me from permanent death, I hope someone will reanimate me from Alcor's tanks to be a tour guide at the Museum, where I can regale visitors with stories of using a 029 keypunch to make a deck of computer cards with holes punched for notching, so that some cards would drop off when a paper clip was inserted. Sounded great, but I didn't have a logical system for more than a nonexclusive OR. When I later encountered Boolean logic I was one motivated student! Oh, and for Spike, a typewriter is a system that takes single character input from a keyboard and immediately outputs it to a printing device, one character at a time, unbuffered, right? - David Harris, Palo Alto, California. From stefano.vaj at gmail.com Tue Jan 5 11:11:58 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 5 Jan 2010 12:11:58 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <85434.91068.qm@web65601.mail.ac4.yahoo.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> Message-ID: <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> 2009/12/30 The Avantguardian : > Well some hints are more obvious than others. ;-) > http://www.hplusmagazine.com/articles/bio/spooky-world-quantum-biology > http://www.ks.uiuc.edu/Research/quantum_biology/ It is not that I do not know the sources, Penrose in the first place. Car engines are also made of molecules, which are made of atoms, and ultimately are the expression of an underlying quantum reality. What I find unpersuasive is the theory that life, however defined, is anything special amongst high-level chemical reactions.
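A minimal sketch of the computability point taken up just below, assuming Python with numpy (every name in it is illustrative, not anyone's actual code): the complete quantum state of a single qubit can be stored and evolved exactly by an ordinary deterministic program, with measurement statistics recovered from the amplitudes.

import numpy as np

# One qubit, simulated classically. |0> is the starting state.
ket0 = np.array([1.0, 0.0], dtype=complex)
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the H gate

state = hadamard @ ket0        # an equal superposition of |0> and |1>
probs = np.abs(state) ** 2     # Born rule: probabilities [0.5, 0.5]

rng = np.random.default_rng(2010)   # pseudorandom, hence reproducible
print(probs)
print(rng.choice([0, 1], size=10, p=probs))   # simulated measurement outcomes

An n-qubit register needs 2^n complex amplitudes, so the simulation is computable but quickly becomes impractical, and a real device's outcomes, unlike those of the seeded generator above, are not guaranteed reproducible by any pseudorandom source. That is the usual sense in which quantum computing stays inside the field of the computable without being classically efficient.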
>> But, there again, quantum computing fully remains in the field of >> computability, does it not? And the existence of "organic computers" >> implementing such principles would be proof that such computers can be >> built. In fact, I would suspect that "quantum computation", in a >> "Wolframian sense", would be all around us, also in other, non-organic, >> systems. > > I have no proof but I suspect that many biological processes are indeed quantum computations. Quantum tunneling of information backwards through time could, for example, explain life's remarkable ability to anticipate things. It may very well be the case that quantum computation is in a sense pervasive, but again I do not see why life, however defined, would be a special case in this respect, since I do not see organic brains exhibiting quantum computation features any more than, say, PCs. I suspect that "biological anticipations", etc., are more in the nature of "optical artifacts", like the Intelligent Design of organisms. >> There again, the theoretical issue would be simply that of executing a >> program emulating what we execute ourselves closely enough to qualify >> as "human-like" for arbitrary purposes, and find ways to implement it >> in a manner not making us await its responses for multiples of the >> duration of the Universe... ;-) > > In order to do so, it would have to consider a superposition of every possible response and collapse the output "wavefunction" on the most appropriate response. *If* organic brains actually do some quantum computing. Now, I still have to see any human being solving a typical quantum computing problem with a pencil and a piece of paper... ;-) -- Stefano Vaj From gts_2000 at yahoo.com Tue Jan 5 11:26:49 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 03:26:49 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: Message-ID: <647312.20523.qm@web36504.mail.mud.yahoo.com> --- On Mon, 1/4/10, John Clark wrote: > Well I admit it does a little bit seem like > intelligence to me too, but only a little bit; if a computer > had done the exact same thing rather than an amoeba you would > be screaming that it has nothing to do with intelligence, > it's just programming. No, I consider computers intelligent. -gts From stathisp at gmail.com Tue Jan 5 11:32:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 22:32:31 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <582583.74076.qm@web36504.mail.mud.yahoo.com> References: <582583.74076.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/5 Gordon Swobe : >> But how would we ever distinguish the NCC from something >> else that just had an effect on general neural function? >> If hypoxia causes loss of consciousness, that doesn't mean that >> the NCC is oxygen. > > We know ahead of time that the presence of oxygen will play a critical role. > > Let us say we think neurons in brain region A play the key role in consciousness. If we do not shut off the supply of oxygen but instead shut off the supply of XYZ to region A, and the patient loses consciousness, we then have reason to say that oxygen, XYZ and the neurons in region A play important roles in consciousness. We then test many similar hypotheses with many similar experiments until we have a complete working hypothesis to explain the NCC.
But you claim that it is possible to make p-neurons which function like normal neurons but, being computerised, lack the NCC, and putting these neurons into region A as replacements will not cause the patient to fall to the ground unconscious. So if you see in your experiments the patient losing consciousness, or any other behavioural change, that must be due to something computable, and therefore not the NCC. The essential function of the NCC is to prevent the patient from being a zombie, and you can never observe this in an experiment. -- Stathis Papaioannou From mbb386 at main.nc.us Tue Jan 5 12:03:37 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 5 Jan 2010 07:03:37 -0500 (EST) Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com><4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <36087.12.77.168.194.1262693017.squirrel@www.main.nc.us> > typewriter: you stumped me on that one, never heard of it. > > Actually I must be honest and disqualify myself from this one, for I am one > who not only knows what is a typewriter, but actually used one, in college. > I can out-geezer almost everyone here by having used the kind (in college!) > which does not plug in to the wall. Perhaps only Damien can match this, > methinks. > Methinks you're still young! I learned on such a thing in high school. It was a Royal, IIRC. It's one reason I *really* like the older clicky IBM keyboards: they've got the right sound, although one need not press so hard on the keys. The typewriter we had at home was an Underwood and it was black with round head keys that had silver metal rims. Very attractive looking machine. Did you have plain caps to go over the keys to teach you to "touch type" rather than look? Regards, MB From mbb386 at main.nc.us Tue Jan 5 12:08:04 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 5 Jan 2010 07:08:04 -0500 (EST) Subject: [ExI] effect/affect again. In-Reply-To: <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> Message-ID: <36093.12.77.168.194.1262693284.squirrel@www.main.nc.us> Rafal writes: > > Now beat this: I have helped my mother wring laundry using a > hand-crank operated mangle attached to the non-automatic washing > machine we had in the early 70's. > When I was a small child I got the tips of my fingers nipped by one of these wringers. Yeouch! I thought it so cool, I'd turn the handle and watch the rollers - and of course I touched them while I was turning and ...... oh well. ;) We had clothesline strung all across the laundry area and the clothes were hung up to dry when the weather was too bad to take them outdoors. Regards, MB From mbb386 at main.nc.us Tue Jan 5 12:09:47 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 5 Jan 2010 07:09:47 -0500 (EST) Subject: [ExI] mangle In-Reply-To: <4B429FA4.2000704@satx.rr.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> <4B429FA4.2000704@satx.rr.com> Message-ID: <36095.12.77.168.194.1262693387.squirrel@www.main.nc.us> Damien writes: > Ha!
I helped my mother haul out washing to the line to dry after she'd > *boiled it in the copper* (as it was called) long before we had a > washing machine. > I remember "the copper" in the laundry cupboard, but it was not used any longer, AFAIK. Regards, MB From gts_2000 at yahoo.com Tue Jan 5 12:10:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 04:10:59 -0800 (PST) Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: Message-ID: <263605.11430.qm@web36507.mail.mud.yahoo.com> --- On Mon, 1/4/10, Aware wrote: > Compare ISTJ (Gordon, presumably, but with high certainty) > with INTJ (Jef), per the Wikipedia descriptions: Actually the last time I took that test, it called me an INTJ too. Most likely you see the influence of analytic philosophy on my intellectual life in recent years. After ignoring the analytics all my life (they looked so darned boring, and who cares about language, logic, meaning and reality anyway?) I finally took the plunge and spent the last year or two reading and surveying thinkers like Frege, Wittgenstein, Russell, Moore and others. And yes Searle also hails from that tradition. The analytics tend to value reality and common sense over lofty and, on close analysis, often meaningless abstractions. I no longer care to debate how many angels can dance on the head of a pin. You'll have to show me the angels first! :) -gts From jameschoate at austin.rr.com Tue Jan 5 13:27:24 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Tue, 5 Jan 2010 13:27:24 +0000 Subject: [ExI] effect/affect again. In-Reply-To: <4B42D789.2070707@rawbw.com> Message-ID: <20100105132725.9O1SC.183600.root@hrndva-web17-z01> I've been typing since 1968 when I took possession of my mother's K-Mart Lemon Yellow portable. I learned how to actually type after reading an article in the Houston Chronicle Sunday Parade by William F. Buckley Jr. The interviewer asked him what the most important skill he ever learned was and how he learned it. He said typing, and told how he'd learned it: he put a layout of the keyboard on a 3x5 card and taped it to the top of the keyboard. Took him a couple of weeks to learn to touch type. I tried it and it worked. I've suggested it to others and they find it works as well. I still have little Russian keyboards taped to my laptop screen bezel when I have to type Russian and forget where the keys are. It really works well. ---- Lee Corbin wrote: > spike wrote: > > > typewriter: you stumped me on that one, never heard of it. > > > > Actually I must be honest and disqualify myself from this one, for I am one > > who not only knows what is a typewriter, but actually used one, in college. > > I can out-geezer almost everyone here by having used the kind (in college!) > > which does not plug in to the wall. Perhaps only Damien can match this, > > methinks.
-- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From gts_2000 at yahoo.com Tue Jan 5 13:31:30 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 05:31:30 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <200675.1060.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/5/10, Stathis Papaioannou wrote: > Is there only one type of c-neuron or is it possible to > insert m-neurons which, though they are functionally identical to > b-neurons, result in a different kind of consciousness? I don't know what you mean by "different kind of consciousness". I will say this: if m-neurons cure our man Sam and he then takes LSD, it will affect his conscious experience just as it would for anyone else. > Philosophy has to give an answer that's in accordance with > what would actually happen, what you would actually experience, > otherwise it's worse than useless. The discussion we have been having > is an example of a philosophical problem with profound practical > consequences. If I get a new super-fast computerised brain and you're > right I would be killing myself, whereas if I'm right I would become an > immortal super-human. I think it's important to be sure of the > answer before going ahead! True. On the other hand, perhaps you could view it as something like Pascal's wager. You have little to lose by believing in digital immortality. If it doesn't work for you then you won't know about it. And you'll never know the truth from talking to the zombies who've tried it. When you ask them, they always say it worked just fine. > You shouldn't dismiss the slippery slope argument so quickly. Either > you suddenly become a zombie when a certain proportion of your neurons' > internal workings are computerised or you don't. If you don't, then the > options are that you don't become zombified at all or that you become > zombified in proportion to how much of each neuron is computerised. > Either sudden or gradual zombification seems implausible > to me. Gradual zombification seems plausible to me. In fact, we've already discussed this same problem but with a different vocabulary. A week or two ago, I allowed that negligible formal programmification (is that a word?) of real brain processes would result only in negligible loss of intentionality. >> I think that after the initial operation he becomes a >> complete basket-case requiring remedial surgery, and that in >> the end he becomes a philosophical zombie or something very >> close to one. If his surgeon has experience then he becomes >> a zombie or near zombie on day one. > > I don't understand why you say this. Perhaps I haven't > explained what I meant well. The p-neurons are drop-in replacements > for the b-neurons, just like pulling out the LM741 op amps in a > piece of audio equipment and replacing them with TL071's. The TL071 > performs the same function as the 741 and has the same pin-out, so the > equipment will function just the same You've made the same assumption (wrongly imo) as in your last experiment that p-neurons will behave and function exactly like the b-neurons they replaced. They won't, except perhaps under epiphenomenalism.
If you accept epiphenomenalism and reject the common, and in my opinion much more sensible, view that experience affects behavior, including neuronal behavior, then we need to discuss that philosophical problem before we can go forward. It looks to me as though serious complications will arise for the first surgeon who attempts this surgery with p-neurons. By the way, this conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he might otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction. Fortunately for Cram (or at least for his body) the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons wherever necessary in his brain until his patient appears ready for life on the streets, zombifying him in the process. > Now, isn't it clear from this that Cram must behave > normally and must (at least) have normal experiences in the parts of > his brain which aren't replaced No, see above. > If Cram has neurons in his language centre replaced then he > must be able to communicate normally and respond to verbal input > normally in every other way: draw a picture, laugh with genuine > amusement at a joke, engage in philosophical debate. He must also > genuinely believe that he understands everything, since if he didn't > he would tell us. No, he would not tell us! The surgeon programmed Cram to behave normally and to lie about his subjective experience, all the while believing naively that his efforts counted as cures for the symptoms and side-effects his patient reported. Philosophical zombies have no experience. They know nothing whatsoever, but they lie about it. -gts From gts_2000 at yahoo.com Tue Jan 5 13:54:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 05:54:36 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <670704.9608.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/5/10, Stathis Papaioannou wrote: > But you claim that it is possible to make p-neurons which > function like normal neurons but, being computerised, lack the NCC, > and putting these neurons into region A as replacements will not cause > the patient to fall to the ground unconscious. No, I make no such claim. Cram's surgeon will no doubt find a way to keep the man walking, even if semantically brain-dead from the effective lobotomization of his Wernicke's and related. -gts From stathisp at gmail.com Tue Jan 5 14:39:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jan 2010 01:39:27 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <670704.9608.qm@web36501.mail.mud.yahoo.com> References: <670704.9608.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/6 Gordon Swobe : > --- On Tue, 1/5/10, Stathis Papaioannou wrote: > >> But you claim that it is possible to make p-neurons which >> function like normal neurons but, being computerised, lack the NCC, >> and putting these neurons into region A as replacements will not cause >> the patient to fall to the ground unconscious. > > No, I make no such claim. Cram's surgeon will no doubt find a way to keep the man walking, even if semantically brain-dead from the effective lobotomization of his Wernicke's and related. Well, Searle makes this claim.
He says explicitly that the behaviour of a brain can be simulated by a computer, and invokes Church's thesis in support of this. However, he claims the simulated brain won't have consciousness, and will result in a philosophical zombie. Perhaps there is some confusion because Searle is talking about simulating a whole brain, not a neuron, but if you can make a zombie brain it should certainly be possible to make a zombie neuron. That's what a p-neuron is: it acts just like a b-neuron, the b-neurons around it think it's a b-neuron, but because it's computerised, you claim, it lacks the essentials for consciousness. By definition, if the p-neurons function as advertised they can be swapped for the equivalent b-neuron and the person will behave exactly the same and honestly believe that nothing has changed. If you *don't* believe p-neurons like this are possible then you disagree with Searle. Instead, you believe that there is some aspect of brain physics that is uncomputable, and therefore that weak AI and philosophical zombies may not be possible. This is a logically consistent position, while Searle's is not. However, there is no scientific evidence that the brain uses uncomputable physics. -- Stathis Papaioannou From thespike at satx.rr.com Tue Jan 5 16:06:06 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 10:06:06 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: References: <201001050755.o057tCga000596@andromeda.ziaspace.com> Message-ID: <4B43636E.7020206@satx.rr.com> On 1/5/2010 3:59 AM, BillK wrote: > How about these descriptions? Hey, I'm *all* of them! How can that be? Give me an abstract theory about it, quick! Damien Broderick From jonkc at bellsouth.net Tue Jan 5 16:47:00 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 5 Jan 2010 11:47:00 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <263605.11430.qm@web36507.mail.mud.yahoo.com> References: <263605.11430.qm@web36507.mail.mud.yahoo.com> Message-ID: <9FFFB279-C2B6-448A-BAFC-CA7FDBDED411@bellsouth.net> On Jan 5, 2010 Gordon Swobe wrote: > Gradual zombification seems plausible to me. Yes, I know it does; however, zombification would not seem plausible to you if you understood Darwin's Theory of Evolution. > I finally took the plunge and spent the last year or two reading and surveying thinkers like Frege, Wittgenstein, Russell, Moore and others. And yes Searle also hails from that tradition. The two greatest philosophical discoveries of the 20th century were Quantum Mechanics and Gödel's Incompleteness Theorem; philosophers did not discover either of them. In fact Wittgenstein probably didn't even read Gödel's 1931 paper until 1942, and when he did comment on it, in an article published after his death, he said Gödel's paper was just a bunch of tricks of a logical conjurer. He seemed to think that prose could disprove a mathematical proof; even many of Wittgenstein's fans are embarrassed by his last article. And by the way, the greatest philosophical discovery of the 19th century was Darwin's Theory of Evolution and that also did not involve philosophers. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Tue Jan 5 17:49:36 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 11:49:36 -0600 Subject: [ExI] quantum brains In-Reply-To: <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> Message-ID: <4B437BB0.2020306@satx.rr.com> On 1/5/2010 5:11 AM, Stefano Vaj wrote: >> > In order to do so, it would have to consider a superposition of every possible response and collapse the output "wavefunction" on the most appropriate response. > > *If* organic brains actually do some quantum computing. Now, I still > have to see any human being solving a typical quantum computing > problem with a pencil and a piece of paper... ;-) I suppose it's possible that some autistic lightning calculators do that. But I've read arxiv papers recently arguing that photosynthesis functions via entanglement, so something that basic might be operating in other bio systems. And of course since I'm persuaded that some psi phenomena are real, *something* weird as shit is needed to account for them, something that can either do stupendous simulations in multiple worlds/superposed states, or can modify its state according to outcomes in the future. If that's not QM, it's something equally hair-raising that electronic computers aren't built to do. Damien Broderick From max at maxmore.com Tue Jan 5 17:54:01 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 11:54:01 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> BillK wrote: >As Ben mentioned, these personality types come >out sounding like astrology descriptions. (Don't >say anything bad in case you drive the paying >customers away). The fact that dating and >match-making sites use them is another minus point. Many businesses use them also, especially the MBTI. I find this personality typing system interesting and intuitively plausible, but regard it with a low degree of confidence. The comparison to astrological descriptions is not unreasonable, though the MBTI typing seems more specific and predictive. I've read and reviewed a number of books and papers on the topic, but have not yet come across a good test of the MBTI. Here are some of my relevant reviews: Astrology and Alchemy -- The Occult Roots of the MBTI European Business Forum published on 03/01/2004 http://www.manyworlds.com/exploreco.aspx?coid=CO5240417484857 Personality Plus The New Yorker published on 09/20/2004 http://www.manyworlds.com/exploreco.aspx?coid=CO112404141215 The Cult of Personality http://www.manyworlds.com/exploreco.aspx?coid=CO5270512997 Please Understand Me II: Temperament, Character, Intelligence by David Keirsey http://www.manyworlds.com/exploreco.aspx?coid=CO814013273511 Personality Tests: Back With a Vengeance by Alison Overholt http://www.manyworlds.com/exploreco.aspx?coid=CO11150412584917 ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From thespike at satx.rr.com Tue Jan 5 18:20:37 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 12:20:37 -0600 Subject: [ExI] MBTI, and what a difference a letter makes...
In-Reply-To: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> References: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> Message-ID: <4B4382F5.1010207@satx.rr.com> On 1/5/2010 11:54 AM, Max More wrote: > The comparison to astrological descriptions is not unreasonable, though > the MBTI typing seems more specific and predictive. That's not the salient distinction, though, Max. It's that MBTI captures your own assessment of *how you are and how you function*--while astrology bogusly claims to extrapolate all those data from your sun sign and planetary configurations at birth (not even at conception). It's barely possible that babies of a certain genetically-controlled, or developmentally-shaped, constitution will be born during a certain season, or somehow be sensitive to some cosmic condition that triggers hormones that provoke labor at a certain time of night, or plus or minus N days, but that's madly speculative and far too general in any case. The fact that MBTI gets almost everything right about me and my wife is obviously because it does capture what we regard as the crucial elements of our attitudes, drives, behavior, and feeds that back to us in a neat summary, together with purportedly empirical information on how people of our type will get on with other kinds of humans. Astrological systems could probably do that too, if you were allowed to browse through the descriptors and choose what "sign" you are, with the actual constellations etc entirely irrelevant (as they almost certainly are, except for the seasonal aspect mentioned above). Damien Broderick From thespike at satx.rr.com Tue Jan 5 18:29:28 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 12:29:28 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <4B4382F5.1010207@satx.rr.com> References: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> <4B4382F5.1010207@satx.rr.com> Message-ID: <4B438508.2000706@satx.rr.com> On 1/5/2010 12:20 PM, I wrote: > Astrological systems could probably do that too, if you were allowed to > browse through the descriptors and choose what "sign" you are, with the > actual constellations etc entirely irrelevant (as they almost certainly > are, except for the seasonal aspect mentioned above). Hmm, so what sun sign is closest to INTJ? I want to adopt it. I'll gladly change my birthday. Most posters here could take the same day, I imagine. What a party! Damien Broderick From max at maxmore.com Tue Jan 5 18:35:51 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 12:35:51 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001051836.o05Ia0YN021706@andromeda.ziaspace.com> Apart from the summary of studies on the Wikipedia page, the following source provides an interesting critique: http://www.bmj.com/cgi/eletters/328/7450/1244#60169 Max From pharos at gmail.com Tue Jan 5 19:43:40 2010 From: pharos at gmail.com (BillK) Date: Tue, 5 Jan 2010 19:43:40 +0000 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <4B43636E.7020206@satx.rr.com> References: <201001050755.o057tCga000596@andromeda.ziaspace.com> <4B43636E.7020206@satx.rr.com> Message-ID: On 1/5/10, Damien Broderick wrote: > Hey, I'm *all* of them! How can that be? Give me an abstract theory about > it, quick! > > I've just returned to my keyboard and reread my email and I feel that I might have to apologize to all our INTJ readers. Really, you're all lovely and the salt of the earth.
We couldn't do without you and think you are the greatest thing since sliced bread. The alternative descriptions were just a bit of fun intended to point out possible flaws in the personality analysis system. Damien, the reason you think you're *all* of them might be because you are a *really* strange personality, :) but more likely it is because of the generalized way they are written. Most people have a very wide range of characteristics and behaviors and can see themselves in all of them, some of the time. Even homicidal dictators might like dogs and do painting or design hobbies. It is very difficult to concentrate on being a homicidal maniac *all* of the time. (Or so I find). BillK From stathisp at gmail.com Tue Jan 5 21:25:32 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jan 2010 08:25:32 +1100 Subject: [ExI] quantum brains In-Reply-To: <4B437BB0.2020306@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> Message-ID: <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> On 06/01/2010, at 4:49 AM, Damien Broderick wrote: >> > And of course since I'm persuaded that some psi phenomena are real, > *something* weird as shit is needed to account for them, something > that can either do stupendous simulations in multiple worlds/ > superposed states, or can modify its state according to outcomes in > the future. If that's not QM, it's something equally hair-raising > that electronic computers aren't built to do. That would make mind uploading impossible. It might still be possible to replicate a mind, but it wouldn't have all the advantages of software. -- Stathis Papaioannou From jonkc at bellsouth.net Tue Jan 5 21:06:05 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 5 Jan 2010 16:06:05 -0500 Subject: [ExI] quantum brains In-Reply-To: <4B437BB0.2020306@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> Message-ID: On Jan 5, 2010, at 12:49 PM, Damien Broderick wrote: > On 1/5/2010 5:11 AM, Stefano Vaj wrote: >> >> *If* organic brains actually do some quantum computing. Now, I still >> have to see any human being solving a typical quantum computing >> problem with a pencil and a piece of paper... ;-) > > I suppose it's possible that some autistic lightning calculators do that. But even the grandest of these lightning calculators are no match for a conventional non-quantum computer. I just calculated the first 707 digits of PI on my iMAC; it took exactly .000426 seconds, and my iMAC isn't even top of the line anymore, although it was for about 15 minutes. In 1873 the mathematician William Shanks died, having spent the last 20 years of his life doing the exact same calculation. No, that isn't correct: it isn't the exact same calculation; my iMac figured the correct numbers but poor Mr. Shanks made an error at digit 527, rendering all further digits and the final 5 years of his life worthless. Fortunately he never learned of his error; nobody did till 1958, when a computer spotted it. It takes my machine .000021 seconds to calculate 527 digits of PI. > I'm persuaded that some psi phenomena are real I know this will greatly surprise you, but I don't entirely agree with you about that. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Tue Jan 5 22:10:00 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 16:10:00 -0600 Subject: [ExI] quantum brains In-Reply-To: <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> Message-ID: <4B43B8B8.3030202@satx.rr.com> On 1/5/2010 3:25 PM, Stathis Papaioannou wrote: > That would make mind uploading impossible. It might still be possible to > replicate a mind, but it wouldn't have all the advantages of software. Yes, it's a disheartening thought. Unless minds are already being copied on a time-sharing entanglement basis through whatever medium psi operates in--which opens the way to a sort of version of (QT-instantiated) souls, maybe, causing John Clark to give up on me finally as a hopeless lost cause. Norman Spinrad wrote a short novel about uploading, DEUS X (1993), in which to my amazement he took the line that it's impossible and bad for your health and just creates zombie cartoon replicas. That's sf writers for ya--we'll consider *anything*... Damien Broderick From gts_2000 at yahoo.com Wed Jan 6 12:59:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 6 Jan 2010 04:59:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <499450.65413.qm@web36505.mail.mud.yahoo.com> --- On Tue, 1/5/10, Stathis Papaioannou wrote: >> No, I make no such claim. Cram's surgeon will no doubt >> find a way to keep the man walking, even if semantically >> brain-dead from the effective lobotomization of his >> Wernicke's and related. > > Well, Searle makes this claim. I don't think Searle ever considered a thought experiment exactly like the one we created here. In any case, in this experiment, I simply deny your claim that my position entails that the surgeon cannot keep the man walking. The surgeon starts with a patient with a semantic deficit caused by a brain lesion in Wernicke's area. He replaces those damaged b-neurons with p-neurons, believing just as you do that they will behave and function in every respect exactly as would have the healthy b-neurons that once existed there. However, on my account of p-neurons, they do not resolve the patient's symptoms and so the surgeon goes back in to attempt more cures, only creating more semantic issues for the patient. The surgeon keeps patching the software, so to speak, until finally the patient does speak and behave normally, not realizing that each patch only further compromised his patient's intentionality. In the end he succeeds in creating a patient who reports normal experiences and passes the Turing test, oblivious to the fact that the patient also now has little or no experience of understanding words, assuming he has any experience at all. -gts > Perhaps > there is some confusion because Searle is talking about > simulating a > whole brain, not a neuron, but if you can make a zombie > brain it > should certainly be possible to make a zombie neuron. > That's what a > p-neuron is: it acts just like a b-neuron, the b-neurons > around it > think it's a b-neuron, but because it's computerised, you > claim, it > lacks the essentials for consciousness. By definition, if > the > p-neurons function as advertised they can be swapped for > the > equivalent b-neuron and the person will behave exactly the > same and > honestly believe that nothing has changed.
> > If you *don't* believe p-neurons like this are possible > then you > disagree with Searle. Instead, you believe that there is > some aspect > of brain physics that is uncomputable, and therefore that > weak AI and > philosophical zombies may not be possible. This is a > logically > consistent position, while Searle's is not. However, there > is no > scientific evidence that the brain uses uncomputable > physics. > > > -- > Stathis Papaioannou > From gts_2000 at yahoo.com Wed Jan 6 14:28:18 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 6 Jan 2010 06:28:18 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: <4B42D3AC.6090504@rawbw.com> Message-ID: <191400.54764.qm@web36501.mail.mud.yahoo.com> Jef, > Argh, "turtles all the way down", indeed. Then must nature also compute > the infinite expansion of the digits of pi for every soap bubble as well? Your question assumes that nature actually performs soap bubble computations somewhere as if on some Divine Universal Turing Machine. I don't think we have any good reason to believe so. -gts From stathisp at gmail.com Wed Jan 6 14:32:05 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 7 Jan 2010 01:32:05 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <499450.65413.qm@web36505.mail.mud.yahoo.com> References: <499450.65413.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/6 Gordon Swobe : > --- On Tue, 1/5/10, Stathis Papaioannou wrote: > >>> No, I make no such claim. Cram's surgeon will no doubt >>> find a way to keep the man walking, even if semantically >>> brain-dead from the effective lobotomization of his >>> Wernicke's and related. >> >> Well, Searle makes this claim. > > I don't think Searle ever considered a thought experiment exactly like the one we created here. He did, and I finally found the reference. It was in his 1992 book, "The Rediscovery of the Mind", pp 66-67. Here is a quote: <...as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, "We are holding up a red object in front of you; please tell us what you see." You want to cry out, "I can't see anything. I'm going totally blind." But you hear your voice saying in a way that is completely out of your control, "I see a red object in front of me." If we carry the thought-experiment out to the limit, we get a much more depressing result than last time. We imagine that your conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same.> He is discussing here the replacement of neurons in the visual cortex with functionally identical computer chips. He agrees that it is possible to make functionally identical computerised neurons because he accepts that physics is computable. He agrees that these p-neurons will interact normally with the remaining b-neurons because they are, by definition, functionally identical. He agrees that the behaviour of the whole brain will continue as per normal because this also follows necessarily if the p-neurons and remaining b-neurons behave normally.
However, he believes that consciousness will become decoupled from behaviour: the patient will become blind, will realise he is blind and try to cry out, but he will hear himself saying that everything is normal and will be powerless to do anything about it. That would only be possible if the patient is doing his thinking with something other than his brain, although it doesn't seem that Searle realised this, since he has always claimed that thinking is done with the brain and there is no immaterial soul. > In any case, in this experiment, I simply deny your claim that my position entails that the surgeon cannot keep the man walking. > > The surgeon starts with a patient with a semantic deficit caused by a brain lesion in Wernicke's area. He replaces those damaged b-neurons with p-neurons believing just as you do that they will behave and function in every respect exactly as would have the healthy b-neurons that once existed there. However on my account of p-neurons, they do not resolve the patient's symptoms and so the surgeon goes back in to attempt more cures, only creating more semantic issues for the patient. Can you explain why you think the p-neurons won't be functionally identical? It seems that you do believe (unlike Searle) that there is something about neuronal behaviour that is not computable, otherwise there would be nothing preventing the creation of p-neurons that are drop-in replacements for b-neurons, guaranteed to leave behaviour unchanged. As I have said before, this is a logically consistent position; it would mean p-neurons, weak AI, the Chinese Room and philosophical zombies might all be impossible. It is a scientific rather than a philosophical question whether the brain utilises uncomputable physics, and the standard scientific position is that it doesn't. -- Stathis Papaioannou From aware at awareresearch.com Wed Jan 6 16:11:42 2010 From: aware at awareresearch.com (Aware) Date: Wed, 6 Jan 2010 08:11:42 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: <191400.54764.qm@web36501.mail.mud.yahoo.com> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Wed, Jan 6, 2010 at 6:28 AM, Gordon Swobe wrote: > Jef, > >> Argh,"turtles all the way down", indeed.?Then must nature also compute >> the infinite expansion of the digits of pi for every soap bubble as well? > > Your question assumes that nature actually performs soap bubble computations somewhere as if on some Divine Universal Turing Machine. I don't think we have any good reason to believe so. I don't make that assumption. I was offering it as a reductio ad absurdum applicable (I think) to your insistence that "consciousness" is an intrinsic property of some brains, but that it is absent at any particular level of description. I have to admit I've lost track of all the phases of your argument since you and Stathis have gone around and around so many times, and the whole thing tends to evaporate in my mind since the problem, as formulated, can't be modeled (It can't be coherently stated.) As I've said already (three times in this thread) it seems that everyone here (and Searle) would agree with the functionalist position: that perfect copies must be identical, and thus functionalism needs no defense. Stathis continues to argue on the basis of functional identity, since he doesn't seem to see how there could be anything more to the question. [I know Stathis had a copy of Hofstadter's _I AM A STRANGE LOOP_, but I suspect he didn't finish it.] 
John Clark continues to argue on the more abstract basis that evolutionary processes don't produce intelligence without consciousness, which, in my opinion, is flawed, since one can point to examples of evolved "intelligence"--organisms acting with appropriate prediction and control--yet lacking that extra evolutionary layer providing awareness and thus modeling of "self", but when pinned down John appears to go to either limit: Mr. Jupiter Brain wouldn't be very smart if he didn't model himself, or the other (panpsychist) view that even an amoeba has consciousness, but just an eensy teensy bit. And you continue to argue along the lines of Searle that since we KNOW (from indisputable 1st-person evidence) that conscious experience (including qualia, meaning, intentionality) EXISTS, and since we are hard-core functionalists and materialists and can see upon close inspection (however close we might care to look) that there is no place within the formally described system in which such qualia/meaning/intentionality are produced, then there MUST be some extra ingredient, essential to consciousness, of which we are yet unaware. And I've already offered that, despite the seductively strong intuition, reinforced by our nature, language and culture, that these phenomena of qualia/meaning/intentionality are real, undeniable, intrinsic properties of at least certain organisms including most human beings, that there is actually no need for any mysterious extra ingredient. The "mysterious" phenomena are adequately and parsimoniously explained in terms of the (recursive) relationship of the observer to the observed. Of course "we" refer to "ourselves" in this way. So in a sense, the panpsychists got it pretty close, except inside-out and with the assumption of an ontological "consciousness" that isn't necessary. Actually NOTHING has this assumed essential consciousness, but EVERYTHING expresses self-awareness, and will necessarily report 1st-person experience, to the extent that its functional nature implements a reflective model of itself. What more is there to say? - Jef From jonkc at bellsouth.net Wed Jan 6 17:32:24 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 6 Jan 2010 12:32:24 -0500 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> On Jan 6, 2010, Aware wrote: > John Clark continues to argue on the more abstract basis that > evolutionary processes don't produce intelligence without > consciousness, which, in my opinion, is flawed, since one can point to > examples of evolved "intelligence"--organisms acting with appropriate > prediction and control--yet lacking that extra evolutionary layer > providing awareness and thus modeling of "self" The trouble with all these discussions is that people point to things and say, look at that (computer made of beer cans, Chinese room, ameba, or whatever) and say that's intelligent but *OBVIOUSLY* it's not conscious; but it is not obvious at all and in fact they have absolutely no way of knowing it is true. If you show me something and call it "intelligent" then I can immediately call it conscious and don't even need to express reservations on the use of the word with quotation marks as you did because we learned from the history of Evolution that consciousness is easy but intelligence is hard. > when pinned down John appears to go to either limit: Mr.
Jupiter Brain wouldn't be > very smart if he didn't model himself Yes. > or the other (panpsychist) view that even an amoeba has consciousness, but just an eensy teensy bit. If an amoeba is an eensy bit intelligent then it's two eensy bits conscious. John K Clark From x at extropica.org Wed Jan 6 18:20:30 2010 From: x at extropica.org (x at extropica.org) Date: Wed, 6 Jan 2010 10:20:30 -0800 Subject: [ExI] Some new angle about AI. In-Reply-To: <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: 2010/1/6 John Clark : > On Jan 6, 2010, Aware wrote: > The trouble with all these discussions is that people point to things and > say, look at that (computer made of beer cans, Chinese room, ameba, or > whatever) and say that's intelligent but *OBVIOUSLY* it's not conscious; > but it is not obvious at all and in fact they have absolutely no way of > knowing it is true. I agree that consciousness (self-awareness) is not obvious, and can only be inferred. By definition. It seems to me that you're routinely conflating "intelligence" and "consciousness", but then, oddly, you distinguish between them by saying one is much easier than the other. I AGREE that in terms of the evolutionary process that led to the emergence of intelligence and then consciousness (self-awareness) on this planet, the evolution of "intelligence" was a much bigger step, requiring a lot more time, than the evolution of consciousness, which is like just an additional layer of supervision. > If you show me something and call it "intelligent" then > I can immediately call it conscious and don't even need to express > reservations on the use of the word with quotation marks as you did because > we learned from the history of Evolution that consciousness is easy but > intelligence is hard. So why don't you agree with me that intelligence must have "existed" (been recognizable, if there had been an observer) for quite a long time before evolutionary processes stumbled upon the additional, supervisory, hack of self-awareness? >> when pinned down John appears to go to either limit: Mr. Jupiter Brain >> wouldn't be very smart if he didn't model himself > > Yes. > >> or the other (panpsychist) view that even an amoeba has consciousness, but >> just an eensy teensy bit. > > If an amoeba is an eensy bit intelligent then it's two eensy bits conscious. > John K Clark It doesn't (seem, to me) to follow at all that if an amoeba can be said to be intelligent (displays behaviors of effective prediction and control appropriate to its environment of adaptation) that it can necessarily be said to be conscious (exploits awareness of its own states and actions). That seems to me to be an additional layer of supervisory functionality that isn't implemented in the relatively simple structure of the amoeba. You're asserting a continuous QUANTITATIVE scale of consciousness, from the amoeba (and presumably below) up to Mr. Jupiter Brain (and presumably beyond). I'm asserting ongoing, punctuated, QUALITATIVE developments, with novel hacks like self-awareness discovered at some point, exploited for the additional fitness they confer, and eventually superseded by even newer hacks providing greater benefits over greater scope of interaction.
I fully expect that self-awareness will eventually be superseded by a fractal form of hierarchical awareness. - Jef From scerir at libero.it Wed Jan 6 18:20:41 2010 From: scerir at libero.it (scerir) Date: Wed, 6 Jan 2010 19:20:41 +0100 Subject: [ExI] quantum brains In-Reply-To: <4B43B8B8.3030202@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> <4B43B8B8.3030202@satx.rr.com> Message-ID: <69DBD12AC7674C60A97A6F9152851B90@PCserafino> Damien: And of course since I'm persuaded that some psi phenomena are real, *something* weird as shit is needed to account for them, something that can either do stupendous simulations in multiple worlds/superposed states, or can modify its state according to outcomes in the future. If that's not QM, it's something equally hair-raising that electronic computers aren't built to do. # But what is the quantum? J. Wheeler said there was just a "Merlin principle" (named after the legendary magician who, when pursued, changed his form again and again). That is to say: the more we pursue the quantum, the more it changes. Here below is a short list of changing, evolving concepts, rules, topics, problems. Discreteness, indeterminism, probabilities, uncertainty relations, entropic uncertainty relations, non-definiteness of values before measurements, no (in general) retrodiction, essential randomness, incompressible randomness and undecidability, a-causality, contextuality, real/complex/quaternionic formalisms, Hilbert spaces or logical structures representation, correspondence principle, complementarity, duality, smooth transitions, quanta as carriers of limited information, second order complementarity, superpositions, entanglements, conditional entropies can be negative, algebraic non-separability, geometric non-locality, local hidden variables, non-local hidden variables, non-local hidden variables plus time arrow assumption, a-temporality, conspiracy theories, which one: free will or space-time?, time (in general) is not an observable, quantum interferences, Feynman rules, indistinguishability, erasure of indistinguishability, second order interferences, quantum dips, quantum beats, interferences in time, fractal revivals, ghost imaging, from potentiality to actuality via measurements, objective reduction of wave-packet, subjective reduction of wave-packet, pre-measurements, weak measurements, interaction-free measurements, two-time symmetric quantum theory, no-cloning principle, no-deleting principle, no-signaling principle (relativistic causality), are there negative probabilities?, de-coherence, sum-over-paths, beables, many-worlds, many-and-consistent-histories, the transactional, and so on, and on, and on. Is there also a "superquantum" domain? There is for sure since Sandu Popescu and Daniel Rohrlich wrote 'Quantum Nonlocality as an Axiom' (in Foundations of Physics, Vol. 24, No. 3, 1994). Essentially it is the domain of superquantum correlations, stronger than the usual quantum correlations. As we all know, John Bell proved that quantum entanglement enables two space-like separated parties to exhibit classically impossible correlations. Even though these correlations are stronger than anything classically achievable, they cannot be harnessed to make instantaneous (faster than light) communication possible.
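To make the three levels of correlation concrete, here is a minimal sketch in Python (written for this post rather than taken from any of the papers cited below; the standard CHSH-game conventions are assumed: a referee sends random bits x and y, Alice and Bob answer with bits a and b, and they win when a XOR b = x AND y). Classical strategies win at most 75% of the time, entangled quantum strategies reach about 85.4% (the Tsirelson bound, cos^2(pi/8)), and the superquantum box described below always wins, yet in all three cases each party's own output stays a fair coin, so nothing here can carry a signal:

    import random
    from math import cos, pi

    ROUNDS = 100_000

    def chsh_win(a, b, x, y):
        # CHSH winning condition: outputs must satisfy a XOR b = x AND y
        return (a ^ b) == (x & y)

    def classical(x, y):
        # Best deterministic strategy: both parties always answer 0
        # (wins on 3 of the 4 equally likely input pairs, i.e. 75%)
        return 0, 0

    def quantum(x, y):
        # Optimal entangled strategy: with these measurement angles on a
        # maximally entangled pair, P(a == b) = cos^2(angle difference),
        # which works out to cos^2(pi/8) in favour of winning for every
        # input pair.
        alice = (0.0, pi / 4)[x]
        bob = (pi / 8, -pi / 8)[y]
        a = random.randint(0, 1)          # each output alone is a fair coin
        agree = cos(alice - bob) ** 2
        b = a if random.random() < agree else 1 - a
        return a, b

    def pr_box(x, y):
        # Popescu-Rohrlich box: a XOR b = x AND y holds exactly, while
        # each output alone is still a fair coin, so no signalling.
        a = random.randint(0, 1)
        return a, a ^ (x & y)

    for name, box in (("classical", classical), ("quantum", quantum), ("PR box", pr_box)):
        wins = 0
        for _ in range(ROUNDS):
            x, y = random.randint(0, 1), random.randint(0, 1)
            a, b = box(x, y)
            wins += chsh_win(a, b, x, y)
        print(f"{name:9} CHSH win rate: {wins / ROUNDS:.3f}")
    # prints roughly 0.750, 0.854, 1.000

Note that pr_box() is simulated jointly as a single function only for convenience: the box itself would be a nonlocal device, but it is still non-signalling, since neither output taken alone tells its owner anything about the distant input.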
Yet, Popescu and Rohrlich have shown that even stronger correlations can be defined, under which instantaneous communication remains *impossible* (relativistic causality is safe). This raises the question: Why are the correlations achievable by quantum mechanics not maximal among those that preserve relativistic causality? There are no good answers to this question. But it is possible to show that superquantum correlations would result in a world in which the so-called 'communication complexity' becomes 'trivial' [1] but 'magic' [2] [3]. So, good news for the SF writers. Or it seems so. s. [1] Assume Alice and Bob wish to compute some Boolean function f(x, y) of input x, known to Alice only, and input y, known to Bob only. Their concern is to minimize the amount of (classical) communication required between them for Alice to learn the answer. It is clear that this task cannot be accomplished without at least some communication (even if Alice and Bob share prior entanglement), unless f(x, y) does not actually depend on y, because otherwise instantaneous signalling would be possible. Thus, we say that the communication complexity of f is 'trivial' if the problem can be solved with a single bit of communication (a single bit of communication also protects relativistic causality). [2] A nonlocal box is an imaginary device that has an input-output port at Alice's and another one at Bob's, even though Alice and Bob can be space-like separated. Whenever Alice feeds a bit x into her input port, she gets a uniformly distributed random output bit a, locally uncorrelated with anything else, including her own input bit. The same applies to Bob, whose input and output bits we call y and b, respectively. The "magic" appears in the form of a correlation between the pair of outputs and the pair of inputs. Much like the correlations that can be established by use of quantum entanglement. This device (nonlocal box, also named PR box) is a-temporal. Alice gets her output as soon as she feeds in her input, regardless of if and when Bob feeds in his input, and vice versa. Also inspired by entanglement, this is a one-shot device. The correlation appears only as a result of the first pair of inputs fed in by Alice and Bob, respectively. [3] There is some literature, for example: http://arxiv.org/abs/0907.3584 http://arxiv.org/abs/quant-ph/0501159 From sjatkins at mac.com Wed Jan 6 18:45:37 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 06 Jan 2010 10:45:37 -0800 Subject: [ExI] atheism In-Reply-To: <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> Message-ID: <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> On Dec 28, 2009, at 5:03 AM, Stefano Vaj wrote: > 2009/12/28 Samantha Atkins > There is ample evidence that belief regardless of evidence or argument is harmful. > > Mmhhh. I would qualify that as an opinion of a moral duty to swear on the truth of unproved or disproved facts. > > This has something to do with the theist objection that positively believing that Allah does not "exist" would be a "faith" on an equal basis as their own. That is poor reasoning. It is not a "positive" belief at all. It is not even a belief at all. It is not believing in a positive belief for which there is no evidence. Now, if the formulation of "Allah" is actually contradictory then we can go to a stronger logical position of pointing out that such is impossible.
> > Now, I may well be persuaded that my cat is sleeping in the other room even though no final evidence of the truth of my opinion thereupon is (still) there, and to form thousands of such provisional or ungrounded - and often wrong - beliefs is probably inevitable. But would I claim that such circumstances are a philosophical necessity or of ethical relevance? Obviously not... Poor analogy. We know that cats exist and that states such as sleeping exist. We know no such things about gods or that putative states. - samantha From sjatkins at mac.com Wed Jan 6 18:53:34 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 06 Jan 2010 10:53:34 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <4B438508.2000706@satx.rr.com> References: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> <4B4382F5.1010207@satx.rr.com> <4B438508.2000706@satx.rr.com> Message-ID: On Jan 5, 2010, at 10:29 AM, Damien Broderick wrote: > On 1/5/2010 12:20 PM, I wrote: > >> Astrological systems could probably do that too, if you were allowed to >> browse through the descriptors and choose what "sign" you are, with the >> actual constellations etc entirely irrelevant (as they almost certainly >> are, except for the seasonal aspect mentioned above). > > Hmm, so what sun sign is closest to INTJ? I want to adopt it. I'll gladly change my birthday. Most posters here could take the same day, I imagine. What a party! > Well, any system of a sufficient number of variables can be mapped onto any other system reasonably described by that number or fewer variables. Since humans are notoriously limited in the number of variables they can simultaneously consider and since our minds by design jump to find patterns, even where there aren't any, it is easy to see how these systems arise and perpetuate themselves. They usually fall apart at the level of claiming they are predictive, although even there humans are so muddle-headed they will try to fit actual experience to a previous, now-contradicted prediction. - samantha From sjatkins at mac.com Wed Jan 6 18:56:56 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 06 Jan 2010 10:56:56 -0800 Subject: [ExI] quantum brains In-Reply-To: <4B43B8B8.3030202@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> <4B43B8B8.3030202@satx.rr.com> Message-ID: <85365026-7F7B-4897-B4C9-B803D198A64E@mac.com> On Jan 5, 2010, at 2:10 PM, Damien Broderick wrote: > On 1/5/2010 3:25 PM, Stathis Papaioannou wrote: > >> That would make mind uploading impossible. It might still be possible to >> replicate a mind, but it wouldn't have all the advantages of software. > > Yes, it's a disheartening thought. Unless minds are already being copied on a time-sharing entanglement basis through whatever medium psi operates in--which opens the way to a sort of version of (QT-instantiated) souls, maybe, causing John Clark to give up on me finally as a hopeless lost cause. Nature came up with mysterious ju-ju X but humans, a product of ju-ju X, can't build anything else also incorporating X. Yeah, right.
- s From thespike at satx.rr.com Wed Jan 6 20:10:01 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 06 Jan 2010 14:10:01 -0600 Subject: [ExI] quantum brains In-Reply-To: <85365026-7F7B-4897-B4C9-B803D198A64E@mac.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> <4B43B8B8.3030202@satx.rr.com> <85365026-7F7B-4897-B4C9-B803D198A64E@mac.com> Message-ID: <4B44EE19.9060109@satx.rr.com> On 1/6/2010 12:56 PM, Samantha Atkins wrote: >>> >> That would make mind uploading impossible. It might still be possible to >>> >> replicate a mind, but it wouldn't have all the advantages of software. >> > >> > Yes, it's a disheartening thought. Unless minds are already being copied on a time-sharing entanglement basis through whatever medium psi operates in--which opens the way to a sort of version of (QT-instantiated) souls, maybe > > Nature came up with mysterious ju-ju X but humans, a product of ju-ju X, can't build anything else also incorporating X. Yeah, right. That's not the argument. If natural selection stumbled on some sort of entanglement thingee that subserves consciousness and perhaps psi, it is quite plausible that making a mechanical brain out of beer cans and toilet paper or integrated circuits *just isn't using the right kind of stuff to instantiate a conscious mind.* Sure, we could reverse-engineer the process using the right kind of stuff (bioengineered up, maybe, or adiabatic quantum computers, or something as yet undreamed of) but it would still mean that linear electronic computers are a dead end *for consciousness* even if they are wizardly with computations and can beat your pants off at chess or Go or driving a racing car up Mount Everest. Damien Broderick From stefano.vaj at gmail.com Wed Jan 6 20:26:47 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 6 Jan 2010 21:26:47 +0100 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> Message-ID: <580930c21001061226j46722e1eo43f7dbe839395cc8@mail.gmail.com> 2010/1/2 Stathis Papaioannou : > But organic brains do better than computers at the highest level of > mathematical creativity. The "highest level of mathematical creativity" is however only too often anthropomorphically defined as what is difficult to replicate at a given historical moment. Once upon a time, idiot savants doing several-digit arithmetic appeared to be the top of "intelligence". Then chess became the paradigm of human rational thought. Then calculus. Then... I think that Wolfram's A New Kind of Science contains important pieces of insight in this respect. However, what we know organic brains to be *very* bad at doing is quantum computation...
-- Stefano Vaj From stefano.vaj at gmail.com Wed Jan 6 20:39:10 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 6 Jan 2010 21:39:10 +0100 Subject: [ExI] atheism In-Reply-To: <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> Message-ID: <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> 2010/1/6 Samantha Atkins : > On Dec 28, 2009, at 5:03 AM, Stefano Vaj wrote: >> This has something to do with the theist objection that positively believing >> that Allah does not "exist" would be a "faith" on an equal basis as their >> own. > > That is poor reasoning. It is not a "positive" belief at all. It is not > even a belief at all. It is not believing in a positive belief for which > there is no evidence. Now, if the formulation of "Allah" is actually > contradictory then we can go to a stronger logical position of pointing out > that such is impossible. This is exactly my point. A belief in the non-existence of Allah is perfectly plausible, and is on an entirely different level from a belief in its existence: i) because it is perfectly normal and legitimate not just to avoid believing in the existence of unproven things, but also actually to believe in their non-existence; ii) in addition, while Spiderman, Thor or Sherlock Holmes may (have) exist(ed) somewhere, someplace, Allah, Jahvè etc. have some peculiar existential and definitory problems which affect any entity allegedly being distinct from the world, and located out of time... >> Now, I may well be persuaded that my cat is sleeping in the other room even >> though no final evidence of the truth of my opinion thereupon is (still) >> there, and to form thousands of such provisional or ungrounded - and often >> wrong - beliefs is probably inevitable. But would I claim that such >> circumstances are a philosophical necessity or of ethical relevance? >> Obviously not... > > Poor analogy. We know that cats exist and that states such as sleeping > exist. We know no such things about gods or that putative states. What I mean there is that while it is perfectly normal in everyday life to believe things without any material evidence thereof (the existence of cats and sleep does not tell me anything about the current state of my cat, any more than the existence of number 27 on the roulette provides any ground for my belief that this is the number which is going to win, and therefore on which I should bet, at the next throw of the ball), what is abnormal is to claim that such assumptions are a philosophical necessity or of ethical relevance. -- Stefano Vaj From stefano.vaj at gmail.com Wed Jan 6 20:48:57 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 6 Jan 2010 21:48:57 +0100 Subject: [ExI] quantum brains In-Reply-To: <4B437BB0.2020306@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> Message-ID: <580930c21001061248l414ae514h9eb3330cff2d0636@mail.gmail.com> 2010/1/5 Damien Broderick : > I suppose it's possible that some autistic lightning calculators do that. > But I've read arxiv papers recently arguing that photosynthesis functions > via entanglement, so something that basic might be operating in other bio > systems.
> And of course since I'm persuaded that some psi phenomena are real, > *something* weird as shit is needed to account for them, something that can > either do stupendous simulations in multiple worlds/superposed states, or > can modify its state according to outcomes in the future. If that's not QM, > it's something equally hair-raising that electronic computers aren't built > to do. Fine. Perhaps some quantum phenomenon is relevant in, say, the operations of the liver or photosynthesis. What would suggest that it is involved as well in, say, the healing of a wound or the computation performed by organic brains (which happens *not* to exhibit any of the features of a quantum computer)? As you know, I am also inclined to believe that the evidence is more on the side of the existence of some kind of psi phenomena rather than not, even more after reading your book on the subject ;-), but there again, if it had anything to do with quantum features of the brain, would it really be a defining feature of "intelligence"? Most human beings today have access to TV broadcasting, but I think a real human being could easily pass a Turing test even though cut off from the networks. Would lack of access to such very elusive, occasional and peripheral phenomena disqualify an AGI or an uploaded human being from being perceived any differently from his or her neighbours? -- Stefano Vaj From steinberg.will at gmail.com Wed Jan 6 21:15:24 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 6 Jan 2010 16:15:24 -0500 Subject: [ExI] Psi (but read it before you don't read it) Message-ID: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> I saw that Damien was talking about psi. I don't know what most of you think about it. It is good to have a crowd that at least has an opinion on it one way or the other though. When you try to boil down what people consider psionics, it is easy to draw a line between the completely ridiculous and the somewhat ridiculous. It is hard to back up a group of people that often espouses things like pyrokinesis, so I will say here that I would only give merit to a few of the ideas; namely, telepathy, empathy, remote viewing, and precognition. What distinguishes these from the rest is that they can be completely described in terms of knowing rather than doing. Telekinesis and the like rely on the user acting, while our "soft" psi is an act of observation. Acting across space to move an object might be absurd, but given causality and maybe entanglement, knowledge is only limited by computational power. It's not incredibly difficult to imagine a causality analysis system based on observations around us. Think about it like an implicit, extended anthropic principle: If *now* is like it is, the universe must be like it is. This would allow us to communicate telepathically not by *sending* a message, but instead by *knowing*, given the surroundings, what message you will receive. Empathy, remote viewing, and precognition work the same way, using accessible data to predict inaccessible data. The biggest problems are obviously the difficulty of synthesizing this information into coherent ideas and the "causal distance," or relative triviality, of observed events with regard to the topic at hand. Why should the spin of molecules in the air, the position of the stars at night, the precise feel of gravity, give us any indication as to a completely unrelated circumstance?
It would follow from this that events that are causally close to you (that are linked to you by fewer steps backwards and forwards in time, generally having closer x, y, z, t) are more easily predictable than events that are causally distant. It's easy to see that this holds true for extreme circumstances--it is easy to know when I will pick up my fork to eat my next bite of dinner, not so easy when trying to guess the weather on Venus. The middle ground is harder to justify. It seems that predicting earthly events can be as hard as, if not harder than, predicting otherworldly events. But these all rely on observation. The reason it is as impossible to guess the weather in Tulsa as on Venus (or anywhere) is that the system is very independent of any actions we make. Since any informational i/o will be ridiculously garbled by a chaotic system, this will be difficult anywhere. Most things are chaotic and unrelated to us. It follows that psi cannot operate on whim or on a desired object (ask many who believe they experience these things and they will tell you it happens to them rather than their causing it); it, should it exist, is carefully limited and allotted based on what is closest and with the least amount of informational decay. Perhaps some events and ideas manage to escape being broken apart and are instead retained as material information. Or, rather, perhaps some material sets of information diverge into paths sometime in the past, happening to exist in more than one locus later in time and thus be accessible by multiple, separated people. This happens today. We can understand the possible composition of unobservable parts of the universe based on mutated information from backwards in time like CMBR or spectral analysis. These are all based on causal distance. If we receive a wave that contains a lot of information and thus is helpful for understanding, it must have interacted less than more garbled waves have, which means it *does* less before we observe it. A wave that takes its sweet causal time to get to us might not even be a wave anymore; it could be the heat I feel coming from my laptop. When something is garbled, we have to work harder to understand how it is related. The picture of the heat of the universe is expressed very directly, but to deduce the fact from element levels in a rock sample, we have to make many, many more syllogisms, through the anthropic principle, geology, physics and chemistry, before we get to the end result. So--human intuition and experimentation is a means of reversing, through math, the transformations that time and being have effected on objects we want to understand. By taking the slow route, we end up learning more about the laws of the universe, because those laws are manifested in the physical interactions that we have to follow back in time. The "psionic" approach is quicker but would seem to skip a lot of the good stuff, which also leads to a lot of problems with proof and acceptance. We all know that the brain is mathematically capable of more than one is consciously allowed, to an incredible extent. It is in the best mind to humor the idea, if only for as long as a sensible discussion allows.
From nanite1018 at gmail.com Wed Jan 6 21:31:55 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 6 Jan 2010 16:31:55 -0500 Subject: [ExI] Psi (but read it before you don't read it) In-Reply-To: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> Message-ID: <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> > I saw that Damien was talking about psi. I don't know what most of > you think about it. It is good to have a crowd that at least has an > opinion on it one way or the other though. I reject all types of this stuff because there is no conceivable way in which they could operate. The information you would have to analyze in order to gain any information about things in the future, or in other regions, or that is actually happening in someone's mind would require a sensitivity and processing capacity so far beyond what our minds are capable of that it is astonishing. Our senses are very limited in terms of their acuity. Since I reject the very concept of an extra-body soul as meaningless and without ground in empirical evidence, I can see no way that any of this stuff could actually be real. Telepathy or empathy are likely a result of a Sherlock Holmesian attention to detail and an extremely good sense of body language analysis, etc. Other things, like precognition, are meaningless and are certainly the result of a combination of practical psychology (predicting what people will do based on knowledge about them), and of course luck and chance. On a related note, the metaphysical studies (i.e. astrology, psychic, occult, new age; alternatively called "crap") section of my local Borders is now approximately equal in size to the philosophy (which is no longer labeled on any of the signs) and the science sections combined. I find that sad. Joshua Job nanite1018 at gmail.com From reasonerkevin at yahoo.com Thu Jan 7 00:59:36 2010 From: reasonerkevin at yahoo.com (Kevin Freels) Date: Wed, 6 Jan 2010 16:59:36 -0800 (PST) Subject: [ExI] atheism In-Reply-To: <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> Message-ID: <504420.54703.qm@web81603.mail.mud.yahoo.com> ________________________________ From: Stefano Vaj To: ExI chat list Sent: Wed, January 6, 2010 2:39:10 PM Subject: Re: [ExI] atheism 2010/1/6 Samantha Atkins : > On Dec 28, 2009, at 5:03 AM, Stefano Vaj wrote: >> This has something to do with the theist objection that positively believing >> that Allah does not "exist" would be a "faith" on an equal basis as their >> own. > > That is poor reasoning. It is not a "positive" belief at all. It is not > even a belief at all. It is not believing in a positive belief for which > there is no evidence. Now, if the formulation of "Allah" is actually > contradictory then we can go to a stronger logical position of pointing out > that such is impossible. This is exactly my point.
A belief in the non-existence of Allah is perfectly plausible, and is on an entirely different level from a belief in its existence: i) because it is perfectly normal and legitimate not just to avoid believing in the existence of unproven things, but also actually to believe in their non-existence; ii) in addition, while Spiderman, Thor or Sherlock Holmes may (have) exist(ed) somewhere, someplace, Allah, Jahvè etc. have some peculiar existential and definitory problems which affect any entity allegedly being distinct from the world, and located out of time... >> Now, I may well be persuaded that my cat is sleeping in the other room even >> though no final evidence of the truth of my opinion thereupon is (still) >> there, and to form thousands of such provisional or ungrounded - and often >> wrong - beliefs is probably inevitable. But would I claim that such >> circumstances are a philosophical necessity or of ethical relevance? >> Obviously not... > > Poor analogy. We know that cats exist and that states such as sleeping > exist. We know no such things about gods or that putative states. What I mean there is that while it is perfectly normal in everyday life to believe things without any material evidence thereof (the existence of cats and sleep does not tell me anything about the current state of my cat, any more than the existence of number 27 on the roulette provides any ground for my belief that this is the number which is going to win, and therefore on which I should bet, at the next throw of the ball), what is abnormal is to claim that such assumptions are a philosophical necessity or of ethical relevance. -- Stefano Vaj It is quite different to say "I am convinced there is no God" than it is to say "I am not convinced there is a God". There is no evidence disproving the existence of God, so to believe there is no God is indeed a faith in itself. From emlynoregan at gmail.com Thu Jan 7 01:48:35 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 7 Jan 2010 12:18:35 +1030 Subject: [ExI] atheism In-Reply-To: <504420.54703.qm@web81603.mail.mud.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> Message-ID: <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> 2010/1/7 Kevin Freels : > > It is quite different to say "I am convinced there is no God" than it is to > say "I am not convinced there is a God" > There is no evidence disproving the existence of God so to believe there is > no god is indeed a faith in itself. It is quite different to say "I am convinced there is no Flying Spaghetti Monster" than it is to say "I am not convinced there is a Flying Spaghetti Monster". There is no evidence disproving the existence of the Flying Spaghetti Monster, so to believe there is no Flying Spaghetti Monster is indeed a faith in itself. If you open your mind far enough, your brain will fall out.
-- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From gts_2000 at yahoo.com Thu Jan 7 02:07:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 6 Jan 2010 18:07:59 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <165235.93343.qm@web36505.mail.mud.yahoo.com> --- On Wed, 1/6/10, Stathis Papaioannou wrote: >> I don't think Searle ever considered a thought experiment exactly like >> the one we created here. > > He did... You've merely re-quoted that same paragraph from that same Chalmers paper that you keep referencing. That experiment hardly compares to your much more ingenious one. :) As you point out: > He is discussing here the replacement of neurons in the > visual cortex.... But here we do something much more profound and dramatic: we replace the semantic center(s) of the brain, presumably integral to both spoken and unspoken thought. > He agrees that it is possible to make functionally identical computerised > neurons because he accepts that physics is computable. He accepts that physics is computable, and that the brain is computable, but he certainly would not agree that your p-neurons act "functionally identical" to b-neurons if we include in that definition c-neuron capability. > However, he believes that consciousness will become > decoupled from behaviour: the patient will become blind, will realise he > is blind and try to cry out, but he will hear himself saying that > everything is normal and will be powerless to do anything about it. That > would only be possible if the patient is doing his thinking with > something other than his brain... Looks to me that he does his thinking with that portion of his natural brain that still exists. Searle goes on to describe how as the experiment progresses and more microchips take the place of those remaining b-neurons, the remainder of his natural brain vanishes along with his experience. > ...he has always claimed that thinking is done with the brain and there > is no immaterial soul. Right. So perhaps Searle used some loose language in a few sentences, and perhaps you misinterpreted him based on those sentences from a single paragraph taken out of context in a paper written by one of his critics. Better to look at his entire philosophy. >> The surgeon starts with a patient with a semantic >> deficit caused by a brain lesion in Wernicke's area. He >> replaces those damaged b-neurons with p-neurons believing >> just as you do that they will behave and function in every >> respect exactly as would have the healthy b-neurons that >> once existed there. However on my account of p-neurons, they >> do not resolve the patient's symptoms and so the surgeon >> goes back in to attempt more cures, only creating more >> semantic issues for the patient. > > Can you explain why you think the p-neurons won't be > functionally identical? You didn't reply to a fairly lengthy post of mine yesterday so perhaps you missed my answer to that question. I'll cut, paste and add to my own words... You've made the same assumption (wrongly imo) as in your last experiment that p-neurons will behave and function exactly like the b-neurons they replaced. They won't, except perhaps under epiphenomenalism, the view that experience plays no role in behavior.
If you accept epiphenomenalism and reject the common and in my opinion more sensible view that experience does affect behavior, then we need to discuss that philosophical problem before we can go forward. (Should we?) Speaking as one who rejects epiphenomenalism, it looks to me that serious complications will arise for the first surgeon who attempts this surgery with p-neurons. Why? Because... 1) experience affects behavior, and 2) behavior includes neuronal behavior, and 3) experience of one's own understanding of words counts as a very important kind of experience. It follows that: Non-c-neurons in the semantic center of the brain will not behave like b-neurons. And because the p-neurons in Cram's brain in my view equal non-c-neurons, they won't behave like the b-neurons they replaced. Does that make sense to you? I hope so. This conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he would otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction. Fortunately for Cram (or at least for his body) the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons throughout his entire brain until his patient appears ready for life on the streets, zombifying much or all of his brain in the process. > It seems that you do believe (unlike Searle) that there is > something about neuronal behaviour that is not computable, No, I don't suppose anything non-computable about them. But I do believe that mere computational representations of b-neurons (aka p-neurons) do not equal c-neurons. > otherwise there would be nothing preventing the creation of p-neurons > that are drop-in replacements for b-neurons, guaranteed to leave > behaviour unchanged. See above re: epiphenomenalism. -gts From nymphomation at gmail.com Thu Jan 7 02:24:55 2010 From: nymphomation at gmail.com (*Nym*) Date: Thu, 7 Jan 2010 02:24:55 +0000 Subject: [ExI] atheism In-Reply-To: <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> Message-ID: <7e1e56ce1001061824p26ce75ddmffb45b8fbec10252@mail.gmail.com> 2010/1/7 Emlyn : > 2010/1/7 Kevin Freels : >> >> It is quite different to say "I am convinced there is no God" than it is to >> say "I am not convinced there is a God" >> There is no evidence disproving the existence of God so to believe there is >> no god is indeed a faith in itself. > > It is quite different to say "I am convinced there is no Flying > Spaghetti Monster" than it is to say "I am not convinced there is a > Flying Spaghetti Monster". > There is no evidence disproving the existence of the Flying Spaghetti > Monster, so to believe there is no Flying Spaghetti Monster is indeed a > faith in itself. "I don't believe in the Flying Spaghetti Monster" is much easier to defend than "I believe there's no Flying Spaghetti Monster". Perhaps christians/pastafarians are framing remarks to trap atheists into having to backtrack in debates? Well, the ones who can actually spell and construct sentences at least. Just a thought..
Heavy splashings, Thee Nymphomation 'If you cannot afford an executioner, a duty executioner will be appointed to you free of charge by the court' From emlynoregan at gmail.com Thu Jan 7 03:28:19 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 7 Jan 2010 13:58:19 +1030 Subject: [ExI] atheism In-Reply-To: <7e1e56ce1001061824p26ce75ddmffb45b8fbec10252@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> <7e1e56ce1001061824p26ce75ddmffb45b8fbec10252@mail.gmail.com> Message-ID: <710b78fc1001061928t4b2038d0k245690f7eed1d0c7@mail.gmail.com> 2010/1/7 *Nym* : > 2010/1/7 Emlyn : >> 2010/1/7 Kevin Freels : >>> >>> It is quite different to say "I am convinced there is no God" than it is to >>> say "I am not convinced there is a God" >>> There is no evidence disproving the existence of God so to believe there is >>> no god is indeed a faith in itself. >> >> It is quite different to say "I am convinced there is no Flying >> Spaghetti Monster" than it is to say "I am not convinced there is a >> Flying Spaghetti Monster". >> There is no evidence disproving the existence of the Flying Spaghetti >> Monster, so to believe there is no Flying Spaghetti Monster is indeed a >> faith in itself. > > "I don't believe in the Flying Spaghetti Monster" is much easier to > defend than "I believe there's no Flying Spaghetti Monster". Perhaps > christians/pastafarians are framing remarks to trap atheists into > having to backtrack in debates? Well, the ones who can actually spell > and construct sentences at least. > There is certainly a lot of abuse of the word "belief". Belief in something or in the lack of something doesn't automatically make you religious. There's not 100% certainty of much anything in this world, but if I say that I believe the sun will rise tomorrow, which I do believe and am saying, that doesn't make me equivalently irrational to someone who says they believe in the Judeo-Christian god. Samantha's words were "belief regardless of evidence or argument", and largely everyone has focussed on "evidence" and ignored "argument". There's no evidence that the Sun will rise tomorrow (we can't know the future), without the assumption, the belief in the argument, that the past can be used to predict the future according to certain rules (which might in turn rest on belief in the usefulness of the laws of logic). It's ok to believe stuff, if you have a supportable reason to. Belief that there is no god, much less no FSM, is supportable (Occam's razor, basically) in a way that belief in any particular deity + system of worship is not. These two uses of the word "belief" are of a different class. Perhaps you could say "supportable belief" and "unsupportable belief". -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From femmechakra at yahoo.ca Thu Jan 7 03:20:23 2010 From: femmechakra at yahoo.ca (Anna Taylor) Date: Wed, 6 Jan 2010 19:20:23 -0800 (PST) Subject: [ExI] Psi (but read it before you don't read it) Message-ID: <92320.5151.qm@web110414.mail.gq1.yahoo.com> JOSHUA JOB nanite1018 at gmail.com wrote: >>I reject all types of this stuff because there is no conceivable way in >>which they could operate. That's absolutely true.
The thing with "Psi" is that it's not mathematically possible and not yet computable. Rejecting it is like saying that it will never be possible. I'm surprised every time a bright mind rejects something that is all about imagination, as if any scientific accomplishment was solely based on the calculations as opposed to the knowledge behind it. >>The information you would have to analyze in order to gain any >>information about things in the future, or in other regions, or that is >>actually happening in someone's mind would require a sensitivity and >>processing capacity so far beyond what our minds are capable of that it is >>astonishing. It could be that it's beyond your capability. Are you saying that people that study human behaviour don't have a better grasp of personality types, racial differences and/or norm behaviours? To analyze an abundant amount of data alone could take one person an eternity or a lifetime and could still never amount to anything, but that doesn't mean that it's not possible, especially when cases are surpassing the scientific refusal. >>Our senses are very limited in terms of their acuity. Some people's senses are very strong. They may not be exact but they are much more accurate than they used to be:) >>Since I reject the very concept of an extra-body soul as meaningless and >>without ground in empirical evidence, I can see no way that any of this >>stuff could actually be real. I'm not sure who you are to reject anything as I have no idea what you have done, said or written, but I'm pretty sure I can find numerous people that can describe an extra-body "soul" (or otherwise "outer", "miracle", "relevance not known" and/or "feeling") experience. >>Telepathy or empathy are likely a result of a Sherlock Holmesian >>attention to detail and an extremely good sense of body language >>analysis, etc. Listening plays a huge role within telepathy and feelings play a huge role in empathy. Telepathy as defined is ignorant. To really believe that someone can solely humanly "hear someone's mental thoughts" is rather childish, but when you actually take the time to listen you can learn a great abundance of what people think. >>Other things, like precognition, are meaningless and are certainly the >>result of a combination of practical psychology (predicting what people >>will do based on knowledge about them), and of course luck and chance. I don't really believe that anyone can see "the Future". If you are so closed off about the idea of practical psychology and how it can "affect" people, then you just don't see anything grander than your limited calculations. That's too bad, because to analyze is great, but "out of the box" ideas or experiences, imagination and a creative role help to change and make a difference. Refusing an idea simply because no one has proved it is simply unimaginative. I guess you like your box:) >>On a related note, the metaphysical studies (i.e. astrology, psychic, >>occult, new age; alternatively called "crap") section of my local >>Borders is now approximately equal in size to the philosophy (which is >>no longer labelled on any of the signs) and the science sections >>combined. I find that sad. I got bored with math and economics a long time ago. (I really didn't feel like taking the time to fully understand it.) Thinking that maybe some people are good at some things while others are good at different things may give you a wider perspective on the whole "Psi" thing.
Anna:) From stathisp at gmail.com Thu Jan 7 06:52:33 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 7 Jan 2010 17:52:33 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <165235.93343.qm@web36505.mail.mud.yahoo.com> References: <165235.93343.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/7 Gordon Swobe : > --- On Wed, 1/6/10, Stathis Papaioannou wrote: > >>> I don't think Searle ever considered a thought experiment exactly like >>> the one we created here. >> >> He did... > > You've merely re-quoted that same paragraph from that same Chalmers paper that you keep referencing. That experiment hardly compares to your much more ingenious one. :) > > As you point out: > >> He is discussing here the replacement of neurons in the >> visual cortex.... > > But here we do something much more profound and dramatic: we replace the semantic center(s) of the brain, presumably integral to both spoken and unspoken thought. You can see though that it's just a special case. We could replace neurons in any part of the brain, affecting any aspect of cognition. >> He agrees that it is possible to make functionally identical computerised >> neurons because he accepts that physics is computable. > > He accepts that physics is computable, and that the brain is computable, but he certainly would not agree that your p-neurons act "functionally identical" to b-neurons if we include in that definition c-neuron capability. Functionally identical *except* for consciousness, in the same way that a philosophical zombie is functionally identical except for consciousness. All a p-neuron has to do is pass as a normal neuron as far as the b-neurons are concerned, i.e. produce the same outputs in response to the same inputs. Are you claiming that it is possible for a zombie to fool intelligent and fully conscious humans but impossible for a p-neuron to fool b-neurons? That doesn't sound plausible, but if it is the case, it simply means that there is something about the behaviour of neurons which is not computable. You can't say both that the behaviour of neurons is computable *and* that it's impossible to make p-neurons which behave like b-neurons. >> However, he believes that consciousness will become >> decoupled from behaviour: the patient will become blind, will realise he >> is blind and try to cry out, but he will hear himself saying that >> everything is normal and will be powerless to do anything about it. That >> would only be possible if the patient is doing his thinking with >> something other than his brain... > > Looks to me that he does his thinking with that portion of his natural brain that still exists. Searle goes on to describe how as the experiment progresses and more microchips take the place of those remaining b-neurons, the remainder of his natural brain vanishes along with his experience. Yes, but the problem is that the natural part of his brain is constrained to behave in the same way as if there had been no replacement, since the p-neurons send it the same output. It's impossible for the rest of the brain to behave differently. Searle seems to acknowledge this because he accepts that the patient will behave normally, i.e. will have normal motor output. However, he thinks the patient will have abnormal thoughts which he will be unable to communicate!
Where do these thoughts come from, if all the b-neurons in the brain are behaving normally? They can only come from something other than the neurons. If you have another explanation, please provide it. >> ...he has always claimed that thinking is done with the brain and there >> is no immaterial soul. > Right. So perhaps Searle used some loose language in a few sentences and perhaps you misinterpreted him based on those sentences from a single paragraph taken out of context in a paper written by one of his critics. Better to look at his entire philosophy. This is a *serious* problem for Searle, invalidating his entire thesis that it is possible to make brain components that behave normally but lack consciousness. It simply isn't possible. I think even you are seeing this, since to avoid the problem you now seem to be suggesting that it isn't really possible to make zombie p-neurons at all. >>> The surgeon starts with a patient with a semantic >>> deficit caused by a brain lesion in Wernicke's area. He >>> replaces those damaged b-neurons with p-neurons believing >>> just as you do that they will behave and function in every >>> respect exactly as would have the healthy b-neurons that >>> once existed there. However on my account of p-neurons, they >>> do not resolve the patient's symptoms and so the surgeon >>> goes back in to attempt more cures, only creating more >>> semantic issues for the patient. >> >> Can you explain why you think the p-neurons won't be >> functionally identical? > > You didn't reply to a fairly lengthy post of mine yesterday so perhaps you missed my answer to that question. I'll cut, paste and add to my own words... > > You've made the same assumption (wrongly imo) as in your last experiment that p-neurons will behave and function exactly like the b-neurons they replaced. They won't except perhaps under epiphenomenalism, the view that experience plays no role in behavior. > > If you accept epiphenomenalism and reject the common and in my opinion more sensible view that experience does affect behavior then we need to discuss that philosophical problem before we can go forward. (Should we?) > > Speaking as one who rejects epiphenomenalism, it looks to me that serious complications will arise for the first surgeon who attempts this surgery with p-neurons. Why? > > Because... > > 1) experience affects behavior, and > 2) behavior includes neuronal behavior, and > 3) experience of one's own understanding of words counts as a very important kind of experience, > > It follows that: > > Non-c-neurons in the semantic center of the brain will not behave like b-neurons. And because the p-neurons in Cram's brain in my view equal non-c-neurons, they won't behave like the b-neurons they replaced. > > Does that make sense to you? I hope so. It makes sense. You are saying that the NCC affects neuronal behaviour, and the NCC is that part of neuronal behaviour that cannot be simulated by computer, since if it could you could program the p-neurons to adjust their I/O behaviour accordingly. Therefore, neurons must contain uncomputable physics in the NCC. > This conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he would otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction.
Fortunately for Cram (or at least for his body) the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons throughout his entire brain until his patient appears ready for life on the streets, zombifying much or all his brain in the process. > >> It seems that you do believe (unlike Searle) that there is >> something about neuronal behaviour that is not computable, > > No I don't suppose anything non-computable about them. But I do believe that mere computational representations of b-neurons, (aka p-neurons), do not equal c-neurons. There *must* be something uncomputable about the behaviour of neurons if it can't be copied well enough to make p-neurons, artificial neurons which behave exactly like b-neurons but lack the essential ingredient for consciousness. This isn't a contingent fact, it's a logical requirement. -- Stathis Papaioannou From florent.berthet at gmail.com Thu Jan 7 07:33:35 2010 From: florent.berthet at gmail.com (Florent Berthet) Date: Thu, 7 Jan 2010 08:33:35 +0100 Subject: [ExI] Psi (but read it before you don't read it) In-Reply-To: <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> Message-ID: <6d342ad71001062333i2b0ab2edi893866d1eba32a0@mail.gmail.com> The data so far : http://xkcd.com/373/ 2010/1/6 JOSHUA JOB > I saw that Damien was talking about psi. I don't know what most of you >> think about it. It is good to have a crowd that at least has an opinion on >> it one way or the other though. >> > I reject all types of this stuff because there is no conceivable way in > which they could operate. The information you would have to analyze in order > to gain any information about things in the future, or in other regions, or > that is actually happening in someone's mind would require a sensitivity and > processing capacity so far beyond what our minds are capable of it is > astonishing. Our senses are very limited in terms of their acuity. > > Since I reject the very concept of an extra-body soul as meaningless and > without ground in empirical evidence, I can see no way that any of this > stuff could actually be real. Telepathy or empathy are likely a result of a > Sherlock Holmesian attention to detail and an extremely good sense of body > language analysis, etc. Other things, like precognition, are meaningless and > are certainly the result of a combination of practical psychology > (predicting what people will do based on knowledge about them), and of > course luck and chance. > > On a related note, the metaphysical studies (i.e. astrology, psychic, > occult, new age; alternatively called "crap") section of my local Borders is > now approximately equal in size to the philosophy (which is no longer > labeled on any of the signs) and the science sections combined. I find that > sad. > > > Joshua Job > nanite1018 at gmail.com
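Stathis's point in the p-neuron exchange above - that a replacement part need only reproduce a b-neuron's input/output behaviour to be undetectable to the rest of the network - can be made concrete with a toy model. The following Python sketch is purely illustrative: the leaky integrate-and-fire dynamics and every parameter in it are assumptions made for the example, not anything specified in the thread.

# Toy model (assumed, not from the thread): if a replacement "p-neuron"
# reproduces a "b-neuron's" input/output map, downstream neurons receive
# identical signals and cannot behave differently.

class BNeuron:
    """A 'biological' neuron, caricatured as leaky integrate-and-fire."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0
            return 1  # spike
        return 0


class PNeuron:
    """A different implementation with the same input/output behaviour."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.state = {"v": 0.0, "thr": threshold, "leak": leak}

    def step(self, current):
        s = self.state
        s["v"] = s["v"] * s["leak"] + current
        if s["v"] >= s["thr"]:
            s["v"] = 0.0
            return 1
        return 0


inputs = [0.3, 0.5, 0.2, 0.9, 0.1, 0.7, 0.4] * 3
b, p = BNeuron(), PNeuron()
assert [b.step(i) for i in inputs] == [p.step(i) for i in inputs]

The assertion is the whole argument: any property of the original neuron that makes no difference to this output sequence makes no difference to what the rest of the brain sees.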
From avantguardian2020 at yahoo.com Thu Jan 7 08:44:54 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Thu, 7 Jan 2010 00:44:54 -0800 (PST) Subject: [ExI] Some new angle about AI Message-ID: <383075.6531.qm@web65607.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stefano Vaj > To: ExI chat list > Sent: Tue, January 5, 2010 3:11:58 AM > Subject: Re: [ExI] Some new angle about AI > > 2009/12/30 The Avantguardian : > > Well some hints are more obvious than others. ;-) > > http://www.hplusmagazine.com/articles/bio/spooky-world-quantum-biology > > http://www.ks.uiuc.edu/Research/quantum_biology/ > > It is not that I do not know the sources, Penrose in the first place. > Car engines are also made of molecules, which are made of atoms, and > ultimately are the expression of an underlying quantum reality. What > I find unpersuasive is the theory that life, however defined, is > anything special amongst high-level chemical reactions. Well what other "high-level chemical reactions" are there to compare life to? Flames don't run away when you try to extinguish them. Motile bacteria do. > It may very well be the case that quantum computation is in a sense > pervasive, but again I do not see why life, however defined, would be > a special case in this respect, since I do not see organic brains > exhibiting quantum computation features any more than, say, PCs, and I > suspect that "biological anticipations", etc., are more in the nature > of "optical artifacts" like the Intelligent Design of organisms. I think a lot of the quantum computation goes on below the conscious threshold, things so simple that most people take them for granted. Things like facial recognition which happen nearly instantaneously in the brain but take standard computers running algorithms quite a bit of time to accomplish. Shooting billiards, playing dodgeball, writing a novel, seducing a lover, I imagine a lot of quantum computing goes into these things. Besides you seem to totally discount the fact that brains formed the very concept of quantum mechanics and quantum computing in the first place. > >> There again, the theoretical issue would be simply that of executing a > >> program emulating what we execute ourselves closely enough to qualify > >> as "human-like" for arbitrary purposes, and find ways to implement it > >> in manner not making us await its responses for multiples of the > >> duration of the Universe... ;-) > > > > In order to do so, it would have to consider a superposition of every possible > response and collapse the output "wavefunction" on the most appropriate response. > > *If* organic brains actually do some quantum computing. Now, I still > have to see any human being solving a typical quantum computing > problem with a pencil and a piece of paper... ;-) Perhaps the ability to generalize from specific observations is a quantum computation. A child needs to see no more than a few trees to start recognizing types of trees that he has never seen before as "trees" as opposed to "poles" or "towers". In some sense the generic visual concept of "tree" might somehow be processed as a superposition of every type of tree from an oak to a cedar to a sequoia. Stuart LaForge "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten."
- Neil Armstrong From avantguardian2020 at yahoo.com Thu Jan 7 11:10:12 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Thu, 7 Jan 2010 03:10:12 -0800 (PST) Subject: [ExI] World of Statecraft Message-ID: <293329.31522.qm@web65607.mail.ac4.yahoo.com> I notice that a lot of debate on the list takes the form of debates over sociopolitical ideologies and the relative merits of each utilizing a very limited pool of historical examples: capitalism versus socialism versus minarchism versus populism versus democracy versus fascism etc. These debates seem to become very acrimonious and people seem to invest a lot of emotion in their chosen ideology on what amounts to little more than faith in the status quo. Admittedly I haven't put a lot of thought into it so it is still a very rough idea, but it occurred to me that modified MMORPGs would make a great "laboratory" of sorts to empirically compare all the possible ideologies with one another in a risk-free controlled setting. One would simply need to eliminate computer generated "antagonists" and simply have the world populated by actual players with characteristics and abilities similar to any of the dozens of existing MMORPGs but more "down to earth". The players could form whatever types of "states" that they wanted and compete against each other for some predetermined periods of time with the servers keeping track of metrics of success and failure of the various "states" resulting from the aggregate behavior of the individual players. One could simulate wars and markets and whatever else. This way dozens of civilizations could rise and fall within the space of a few years of real time and the reasons for each could be analyzed by political scientists and economists and the lessons could be applied to the real world. Admittedly this might not be as fun as scorching hordes of computer generated orcs with magical fireballs, but it could be funded by grant money sufficient to pay the participants some small amount of cash for their participation as "research subjects". Stuart LaForge "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong From stefano.vaj at gmail.com Thu Jan 7 12:30:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 7 Jan 2010 13:30:41 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <383075.6531.qm@web65607.mail.ac4.yahoo.com> References: <383075.6531.qm@web65607.mail.ac4.yahoo.com> Message-ID: <580930c21001070430s4032aa2el8c63b979be8d7c@mail.gmail.com> 2010/1/7 The Avantguardian : > Well what other "high-level chemical reactions" are there to compare life to? Flames don't run away when you try to extinguish them. Motile bacteria do. How would that qualify as a quantum effect? :-/ > I think a lot of the quantum computation goes on below the conscious threshold, things so simple that most people take them for granted. Things like facial recognition which happen nearly instantaneously in the brain but take standard computers running algorithms quite a bit of time to accomplish. Of course, organic brains have evolved to do (relatively) well what they do, but this does not tell us anything about their low-level working, nor that they would escape the Principle of Computational Equivalence as far as their... computing features are concerned (the jury may still be out on some other aspects of their working).
The fact that a Motorola processor used to run Windows less efficiently than an Intel processor does not really suggest that the second is a quantum computer. And plenty of phenomena which have apparently little to do with quantum effects are more or less expensive to emulate or computationally intractable. See the weather. Or, once more, the many examples discussed in A New Kind of Science... > Shooting billiards, playing dodgeball, writing a novel, seducing a lover, I imagine a lot of quantum computing goes into these things. Why? Conversely, I am not aware of *even a single feature* of any hypothetical quantum computer which is easily emulated by organic brains. Take for instance integer factorisation. Or any other problem where quantum computing would make a difference. "Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees." (from Wikipedia). If you had a quantum computer in your head, all that should be a piece of cake, once you have learned the appropriate algorithm. It is on the contrary the case that we are *way* better at, say, additions of small integers or Boolean algebra. And, by the way, most natural organic brains have no chance whatsoever of learning to shoot billiards, play dodgeball, write a novel, or seduce a lover, no matter how much training effort you put into it, even though their underlying principles appear pretty similar to one another... -- Stefano Vaj From gts_2000 at yahoo.com Thu Jan 7 12:51:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 7 Jan 2010 04:51:21 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <525969.17123.qm@web36505.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: > It makes sense. You are saying that the NCC affects > neuronal behaviour, and the NCC is that part of neuronal > behaviour that cannot be simulated by computer Not quite. I said *experience* affects behavior, and I did not say we could not simulate the NCC on a computer. Where the NCC (neural correlates of consciousness) exist in real brains, experience exists, and the NCC correlate. (That's why the second "C" in NCC.) Think of it this way: consciousness exists in real brains in the presence of the NCC as solidity of real water exists in the presence of temperatures at or below 32 degrees Fahrenheit. You can simulate ice cubes on your computer but those simulated ice cubes won't help keep your processor from overheating. Likewise, you can simulate brains on your computer but that simulated brain won't have any real experience.
In both examples, you have merely computed simulations of real things. > Therefore, [you think I mean to say that] neurons must contain > uncomputable physics in the NCC. But I don't mean that. Look again at my ice cube analogy! -gts From gts_2000 at yahoo.com Thu Jan 7 13:30:18 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 7 Jan 2010 05:30:18 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <525969.17123.qm@web36505.mail.mud.yahoo.com> Message-ID: <527984.77632.qm@web36507.mail.mud.yahoo.com> Stathis, I wrote: > Where the NCC (neural correlates of consciousness) exist in > real brains, experience exists, and the NCC correlate. > (That's why the second "C" in NCC.) I meant the first "C", of course. The NCC *correlate*. If we knew exactly what physical conditions must exist in the brain for consciousness to exist, i.e., if we knew everything about the NCC, then we could perfectly simulate those physical conditions on a computer. And someday we will do this. But that computer simulation will have only weak AI for the same reason that simulated ice cubes won't cool your computer's processor. I understand why you want to say that I must therefore think consciousness exists outside the material world, or that I think we cannot compute the brain. But that's not what I mean at all. I see consciousness as just a state that the brain can be in. We can simulate that brain-state on a computer just as we can simulate the solid state of water. -gts From painlord2k at libero.it Thu Jan 7 13:44:18 2010 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 07 Jan 2010 14:44:18 +0100 Subject: [ExI] World of Statecraft In-Reply-To: <293329.31522.qm@web65607.mail.ac4.yahoo.com> References: <293329.31522.qm@web65607.mail.ac4.yahoo.com> Message-ID: <4B45E532.9030409@libero.it> On 07/01/2010 12.10, The Avantguardian wrote: > I notice that a lot of debate on the list takes the form of debates > over sociopolitical ideologies and the relative merits of each > utilizing a very limited pool of historical examples: capitalism > versus socialism versus minarchism versus populism versus democracy > versus fascism etc. These debates seem to become very acrimonious > and people seem to invest a lot of emotion in their chosen ideology > on what amounts to little more than faith in the status quo. > Admittedly I haven't put a lot of thought into it so it is still a > very rough idea, but it occurred to me that modified MMORPGs would > make a great "laboratory" of sorts to empirically compare all the > possible ideologies with one another in a risk-free controlled > setting. Yes, they do. For example, EVE Online is considered to have the best economy. Nearly everything in the game is mined and built by the players and can be sold or bought on the open market, with contracts and with direct exchange. Remarkable is the fact that the behavior of the markets is similar to what the theory says. For example, the distribution of market hubs is almost exactly what the theory says. The prices move like the theory says they would. > One would simply need to eliminate computer generated "antagonists" > and simply have the world populated by actual players with > characteristics and abilities similar to any of the dozens of > existing MMORPGs but more "down to earth". Well, in EVE, as in many other MMORPGs, the NPCs are not "antagonists", they are "resources" to harvest in a more or less organized way.
The real antagonists and enemies are other players and other players' corporations and alliances that compete for control/sovereignty over the 0.0 security areas and their resources. This part of the game is as much political as it is military and economic. Huge battles are fought, groups change sides, leave for greener pastures (or simply quieter ones) and enormous quantities of resources are spent or change hands or are invested. To give numbers, many battles normally could be waged by more than 100 pilots on a side and the same for the other. 200-300 pilots in the same fleet are not rare. Given the rules of the game, Madoff-style scams are OK in-game, speculations on the market are OK, economic warfare is OK, infiltrating enemy corporations and alliances to steal and destroy stuff is OK. If it is possible by the game mechanics, it is OK. > The players could form whatever types of "states" that they wanted > and compete against each other for some predetermined periods of time > with the servers keeping track of metrics of success and failure of > the various "states" resulting from the aggregate behavior of the > individual players. One could simulate wars and markets and whatever > else. This way dozens of civilizations could rise and fall within the > space of a few years of real time and the reasons for each could > be analyzed by political scientists and economists and the lessons could > be applied to the real world. EVE has a real economist surveying the economy and releasing quarterly analyses of how the economy works. Dr Eyjolfur Gudmundsson (formerly of the University of Iceland) http://www.gamesindustry.biz/articles/star-bucks http://www.pcpro.co.uk/news/122840/virtual-world-hires-real-economist >> "As a real economist I had to spend months trying to find data to >> test an economic theory but if I was wrong, I wasn't sure if the >> theory was wrong or the data was wrong. At least here I know the >> data is right," Gudmundsson said. >> >> As new players join, CCP adds new planets and asteroids that can be >> exploited, one of several "faucets" that serve to inject funds into >> the universe and keep the economy ticking. >> >> "After we opened up an area where there was more zydrine (an >> in-game mineral), we saw that price dropped. We did not announce >> that there was more explicitly, but in a matter of days the price >> had adjusted," Gudmundsson said. > Admittedly this might not be as fun as scorching hordes of computer > generated orcs with magical fireballs, but it could be funded by > grant money sufficient to pay the participants some small amount of > cash for their participation as "research subjects". Scorching CG orcs with fireballs is boring. Scorching human-generated adversaries in many ways is more fun. The point, like in EVE, is having all the players in the same shared world. Not separated instances. Mirco From stathisp at gmail.com Thu Jan 7 14:03:59 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 01:03:59 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <525969.17123.qm@web36505.mail.mud.yahoo.com> References: <525969.17123.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/7 Gordon Swobe : > --- On Thu, 1/7/10, Stathis Papaioannou wrote: > >> It makes sense.
You are saying that the NCC affects >> neuronal behaviour, and the NCC is that part of neuronal >> behaviour that cannot be simulated by computer > > Not quite. I said *experience* affects behavior, and I did not say we could not simulate the NCC on a computer. "Experience" can only affect behaviour by moving stuff. How does the stuff get moved? What would have to happen is something like this: the NCC molecule attaches to certain ion channels, changing their conformation and thereby allowing an influx of sodium ions, depolarising the cell membrane; and this event constitutes a little piece of experience. So while you claim "experience" cannot be simulated, you allow that the physical events associated with the experience can be simulated, which means every aspect of the neuron's behaviour can be simulated. > Where the NCC (neural correlates of consciousness) exist in real brains, experience exists, and the NCC correlate. (That's why the second "C" in NCC.) > > Think of it this way: consciousness exists in real brains in the presence of the NCC as solidity of real water exists in the presence of temperatures at or below 32 degrees Fahrenheit. > > You can simulate ice cubes on your computer but those simulated ice cubes won't help keep your processor from overheating. Likewise, you can simulate brains on your computer but that simulated brain won't have any real experience. In both examples, you have merely computed simulations of real things. If you want the computer to interact with the world you have to attach it to I/O devices which are not themselves computers. For example, the computer could be attached to a peltier device in order to simulate the cooling effect that an ice cube would have on the processor. >> Therefore, [you think I mean to say that] neurons must contain >> uncomputable physics in the NCC. > > But I don't mean that. Look again at my ice cube analogy! The question of whether it is possible to put a computer in a neuron suit so that its behaviour is, to other neurons, indistinguishable from a natural neuron is equivalent to the question of whether a robot can impersonate a human well enough so that other humans can't tell that it's a robot. I know you believe the robot human would lack intentionality, but you have (I think) agreed that despite this handicap it would be able to pass the TT, pretend to have emotions, and so on, as it would have to do in order to qualify as a philosophical zombie. So are you now saying that while a zombie robot human presents no theoretical problem, a zombie robot neuron, which after all only needs to reproduce much simpler behaviour and only needs to fool other neurons, would be impossible? -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 7 14:10:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 7 Jan 2010 06:10:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <446942.27612.qm@web36504.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: > There *must* be something uncomputable about the behaviour of neurons... No. >... if it can't be copied well enough to make p-neurons, > artificial neurons which behave exactly like b-neurons but lack the > essential ingredient for consciousness. This isn't a contingent fact, > it's a logical requirement. 
Yes and now you see why I claim Cram's surgeon must go in repeatedly to patch the software until his patient passes the Turing test: because the patient has no experience, the surgeon must keep working to meet your logical requirements. The surgeon finally gets it right with Service Pack 9076. Too bad his patient can't know it. -gts From painlord2k at libero.it Thu Jan 7 14:19:25 2010 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 07 Jan 2010 15:19:25 +0100 Subject: [ExI] World of Statecraft In-Reply-To: <293329.31522.qm@web65607.mail.ac4.yahoo.com> References: <293329.31522.qm@web65607.mail.ac4.yahoo.com> Message-ID: <4B45ED6D.5080102@libero.it> On 07/01/2010 12.10, The Avantguardian wrote: > I notice that a lot of debate on the list takes the form of debates over sociopolitical ideologies and the relative merits of each utilizing a very limited pool of historical examples: capitalism versus socialism versus minarchism versus populism versus democracy versus fascism etc. These debates seem to become very acrimonious and people seem to invest a lot of emotion in their chosen ideology on what amounts to little more than faith in the status quo. > > Admittedly I haven't put a lot of thought into it so it is still a very rough idea, but it occurred to me that modified MMORPGs would make a great "laboratory" of sorts to empirically compare all the possible ideologies with one another in a risk-free controlled setting. One would simply need to eliminate computer generated "antagonists" and simply have the world populated by actual players with characteristics and abilities similar to any of the dozens of existing MMORPGs but more "down to earth". The players could form whatever types of "states" that they wanted and compete against each other for some predetermined periods of time with the servers keeping track of metrics of success and failure of the various "states" resulting from the aggregate behavior of the individual players. One could simulate wars and markets and whatever else. This way dozens of civilizations could rise and fall within the > reasons for each could be analyzed by political scientists and economists and the lessons could be applied to the real world. Admittedly this might not be as fun as scorching hordes of computer generated orcs with magical fireballs, but it could be funded by grant money sufficient to pay the participants some small amount of cash for their participation as "research subjects". By the way I found this: Virtual Economy Research Network http://virtual-economy.org/ Mirco From stathisp at gmail.com Thu Jan 7 14:40:52 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 01:40:52 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/7 Aware : > As I've said already (three times in this thread) it seems that > everyone here (and Searle) would agree with the functionalist > position: that perfect copies must be identical, and thus > functionalism needs no defense. The functionalist position is that a different machine performing the same function would produce the same mind. Searle does not agree with this, and neither does everyone on this list, nor to be fair is it trivially obvious.
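The phrase "a different machine performing the same function" can be pinned down with a toy example. The sketch below is hedged: the function and both implementations are invented purely for illustration; it shows two mechanically different computations that no input/output test can tell apart.

# Two "machines" realizing the same abstract function by different means.

import random

def machine_a(n):
    """n-th Fibonacci number, computed by simple iteration."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def machine_b(n):
    """Same function, different mechanism: fast-doubling recursion."""
    def pair(k):  # returns (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        f, g = pair(k // 2)
        c = f * (2 * g - f)   # F(2m)
        d = f * f + g * g     # F(2m+1)
        return (d, c + d) if k % 2 else (c, d)
    return pair(n)[0]

# A behavioural test: probe both black boxes and try to tell them apart.
for _ in range(1000):
    n = random.randint(0, 200)
    assert machine_a(n) == machine_b(n)

Whether sameness of function entails sameness of mind is exactly what is in dispute between Stathis and Gordon; the sketch only fixes what "same function, different machine" means at the level of behaviour.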
> Stathis continues to argue on the basis of functional identity, since > he doesn't seem to see how there could be anything more to the > question. [I know Stathis had a copy of Hofstadter's _I AM A STRANGE > LOOP_, but I suspect he didn't finish it.] I got to chapter 11, as it happens, and I did mean to finish it but still haven't. I agree with Hofstadter's, and your, epiphenomenalism. I usually only contribute to this list when I *disagree* with what someone says and feel that I have a significant argument to present against it. I'm better at criticising and destroying than praising and creating, I suppose. The argument with Gordon does not involve proposing or defending any theory of consciousness, but simply looks at the consequences of the idea that it is possible for a machine to reproduce behaviour but not thereby necessarily reproduce the original consciousness. It's not immediately obvious that this is a silly idea, and a majority of people probably believe it. However, it can be shown to be internally inconsistent, and without invoking any assumptions other than that consciousness is a naturalistic phenomenon. -- Stathis Papaioannou From jonkc at bellsouth.net Thu Jan 7 15:24:59 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 7 Jan 2010 10:24:59 -0500 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: On Jan 6, 2010, at 1:20 PM, x at extropica.org wrote: Me: >> we learned from the history of Evolution that consciousness is easy but >> intelligence is hard. > So why don't you agree with me that intelligence must have "existed" > (been recognizable, if there had been an observer) for quite a long > time Because we learned from the history of Evolution that consciousness is easy but intelligence is hard. > before evolutionary processes stumbled upon the additional, > supervisory, hack of self-awareness What you just said is logically absurd. If consciousness doesn't effect intelligence then there is no way Evolution could have "stumbled upon" the trick of generating consciousness because it would convey no more adaptive advantage than eyes or pigment does for creatures that live all their life in dark caves. In short if even one conscious being exists on Planet Earth and if Evolution is true then the Turing Test works; and if the Turing Test doesn't work then neither does Evolution. > novel hacks like self-awareness discovered at some point, exploited > for the additional fitness they confer Fine, if true and consciousness aids fitness then it can be deduced from behavior. Either you can have intelligence without consciousness or you can not. The propositions lead to mutually contradictory conclusions, they can't both be right and you can't claim both as your own. You've got to make a choice. John K Clark From jonkc at bellsouth.net Thu Jan 7 16:04:21 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 7 Jan 2010 11:04:21 -0500 Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <527984.77632.qm@web36507.mail.mud.yahoo.com> References: <527984.77632.qm@web36507.mail.mud.yahoo.com> Message-ID: <187B6A0A-A8FD-42EB-BC18-A2178641FC72@bellsouth.net> On Jan 7, 2010, Gordon Swobe wrote: > > If we knew exactly what physical conditions must exist in the brain for consciousness to exist, i.e., if we knew everything about the NCC, This NCC of yours is gibberish. You state very specifically that it is not the signals between neurons that produce consciousness, so how can some sort of magical awareness inside the neuron correlate with anything? You must have 100 billion independent conscious entities inside your head. > then we could perfectly simulate those physical conditions on a computer. And someday we will do this. Glad to hear it. > But that computer simulation will have only weak AI So even physical perfection is not enough for consciousness, something must still be missing. Let's see if we can deduce some of the properties of that something. Well first of all obviously it's non-physical, also it can't be detected by the Scientific Method, it can't be produced by Darwin's Theory of Evolution, and it starts with the letter "S". John K Clark > for the same reason that simulated ice cubes won't cool your computer's processor. > > I understand why you want to say that I must therefore think consciousness exists outside the material world, or that I think we cannot compute the brain. But that's not what I mean at all. I see consciousness as just a state that the brain can be in. We can simulate that brain-state on a computer just as we can simulate the solid state of water. > > -gts From jonkc at bellsouth.net Thu Jan 7 15:38:51 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 7 Jan 2010 10:38:51 -0500 Subject: [ExI] Psi (no need to read this post you already know what it says ) In-Reply-To: <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> Message-ID: On Jan 6, 2010, JOSHUA JOB wrote: > I reject all types of this stuff because there is no conceivable way in which they could operate. I reject Psi too but not for that reason, I reject it because there is not one particle of credible evidence that the fucking phenomenon exists. John K Clark From aware at awareresearch.com Thu Jan 7 16:28:56 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 08:28:56 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Thu, Jan 7, 2010 at 6:40 AM, Stathis Papaioannou wrote: > 2010/1/7 Aware : > >> ... it seems that everyone here (and Searle) would agree with >> the functionalist position: that perfect copies must be identical, >> and thus functionalism needs no defense. > > The functionalist position is that a different machine performing the > same function would produce the same mind. Searle does not agree with > this, and neither does everyone on this list, nor to be fair is it > trivially obvious. You say "different machine"; I would say "different substrate", but no matter.
We in this discussion, including Gordon, plus Searle, are of a level of sophistication that none of us believes in a "soul in the machine". Most people in this forum (and other tech/geek forums) have gotten to that level of sophistication, where they can proudly enjoy looking down from their improved point of view and smugly denounce those who don't, while remaining blind to levels of meaning still higher and more encompassing. > >> Stathis continues to argue on the basis of functional identity, since >> he doesn't seem to see how there could be anything more to the >> question. [I know Stathis had a copy of Hofstadter's _I AM A STRANGE >> LOOP_, but I suspect he didn't finish it.] > > I got to chapter 11, as it happens, and I did mean to finish it but > still haven't. I didn't finish it either. I found it very disappointing (my expectations set by GEB) in its self-indulgence and its lack of any substantial new insight. However, it may be useful for some not already familiar with its ideas. > I agree with Hofstadter's, and your, epiphenomenalism. But it's not most people's idea of epiphenomenalism, where the "consciousness" they know automagically emerges from a system of sufficient complexity and configuration. Rather, it's an epistemological understanding of the (recursive) relationship between the observer and the observed. > I usually only contribute to this list when I *disagree* with what > someone says and feel that I have a significant argument to present > against it. I'm better at criticising and destroying than praising and > creating, I suppose. It's always easier to criticize, but creating tends to be more rewarding. Praising tends to fall by the wayside among us INTJs. > The argument with Gordon does not involve > proposing or defending any theory of consciousness, Here I must disagree... > but simply looks > at the consequences of the idea that it is possible for a machine to > reproduce behaviour but not thereby necessarily reproduce the original > consciousness. Your insistence that it is this simple is prolonging the cycling of that "strange loop" you're in with Gordon. It's not always clear what Gordon's argument IS--often he seems to be parroting positions he finds on the Internet--but to the extent he is arguing for Searle, he is not arguing against functionalism. Given functionalism, and the "indisputable 1st person evidence" of the existence of consciousness/qualia/meaning/intentionality within the system ("where else could it be?"), he points out quite correctly that no matter how closely one looks, no matter how subtle one's formal description might be, there's syntax but no semantics in the system. So I suggest (again) to you and Gordon, and Searle, that you need to broaden your context. That there is no essential consciousness in the system; it is in the recursive relation between the observer and the observed. Even (or especially) when the observer and observed are functions of the same brain, you get self-awareness entailing the reported experience of consciousness, which is just as good because it's all you ever really had. > It's not immediately obvious that this is a silly idea, > and a majority of people probably believe it. Your faith in functionalism is certainly a step up from the assumptions of the silly masses. But everyone in this discussion, and most denizens of the Extropy list, already get this. > However, it can be shown > to be internally inconsistent, and without invoking any assumptions > other than that consciousness is a naturalistic phenomenon.
Yes, but that's not the crux of this disagreement. In fact, there is no crux of this disagreement since to resolve it is not to show what's wrong within, but to reframe it in terms of a larger context. Searle and Gordon aren't saying that machine consciousness isn't possible. If you pay attention you'll see that once in a while they'll come right out and say this, at which point you think they've expressed an inconsistency. They're saying that even though it's obvious that some machines (e.g. humans) do have consciousness, it's also clear that no formal system implements semantics. And they're correct. That's why this, and the perennial personal-identity debates, tend to be so intractable: It's like the man looking for the car keys he dropped somewhere in the dark, but looking only around the lamppost, for the obvious reason that that's the only place he can see. Enlarge the context. - Jef From spike66 at att.net Thu Jan 7 16:33:37 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 08:33:37 -0800 Subject: [ExI] Psi (no need to read this post you already know what it says ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> Message-ID: <79471C131D7F4EE28EE05A725ED29AED@spike> ...On Behalf Of John Clark ... >I reject Psi too but not for that reason, I reject it because there is not one particle of credible evidence that the fucking phenomenon exists...John K Clark John I can assure you that the fucking phenomenon exists. But what has that to do with Psi? I don't see how the two are related in any way. spike From kanzure at gmail.com Thu Jan 7 17:27:08 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Thu, 7 Jan 2010 11:27:08 -0600 Subject: [ExI] Fwd: [neuro] Daily Mail on Markram In-Reply-To: <20100107160023.GC17686@leitl.org> References: <20100107160023.GC17686@leitl.org> Message-ID: <55ad6af71001070927ja5080aar149cf51a7e5ce4f8@mail.gmail.com> ---------- Forwarded message ---------- From: Eugen Leitl Date: Thu, Jan 7, 2010 at 10:00 AM Subject: [neuro] Daily Mail on Markram To: tt at postbiota.org, info at postbiota.org, neuro at postbiota.org (aargh, I guess) http://www.dailymail.co.uk/sciencetech/article-1240410/The-real-Frankenstein-experiment-One-mans-mission-create-living-mind-inside-machine.html?printingPage=true The real Frankenstein experiment: One man's mission to create a living mind inside a machine By Michael Hanlon Last updated at 8:30 AM on 04th January 2010 His words staggered the erudite audience gathered at a technology conference in Oxford last summer. Professor Henry Markram, a doctor-turned-computer engineer, announced that his team would create the world's first artificial conscious and intelligent mind by 2018. And that is exactly what he is doing. On the shore of Lake Geneva, this brilliant, eccentric scientist is building an artificial mind. A Swiss - it could only be Swiss - precision-engineered mind, made of silicon, gold and copper. The end result will be a creature, if we can call it that, which its maker believes within a decade may be able to think, feel and even fall in love. Professor Markram's 'Blue Brain' project must rank as one of the most extraordinary endeavours in scientific history.
If this 47-year-old South-African Israeli is successful, then we are on the verge of realising an age-old fantasy, one first imagined when an adolescent Mary Shelley penned Frankenstein, her tale of an artificial monster brought to life - a story written, quite coincidentally, just a few miles from where this extraordinary experiment is now taking place. Success will bring with it philosophical, moral and ethical conundrums of the highest order, and may force us to confront what it means to be human. But Professor Markram thinks his artificial mind will render vivisection obsolete, conquer insanity and even improve our intelligence and ability to learn. What Markram's project amounts to is an audacious attempt to build a computerised copy of a brain - starting with a rat's brain, then progressing to a human brain - inside one of the world's most powerful computers. This, it is hoped, will bring into being a sentient mind that will be able to think, reason, express will, lay down memories and perhaps even experience love, anger, sadness, pain and joy. 'We will do it by 2018,' says the professor confidently. 'We need a lot of money, but I am getting it. There are few scientists in the world with the resources I have at my disposal.' There is, inevitably, scepticism. But even Markram's critics mostly accept that he is on to something and, most importantly, that he has the money. Tens of millions of euros are flooding into his laboratory at the Brain Mind Institute at the Ecole Polytechnique in Lausanne - paymasters include the Swiss government, the EU and private backers, including the computer giant IBM. Artificial minds are, it seems, big business. The human brain is the most complex object in the universe. But Markram insists that the latest supercomputers will soon have its measure. As I toured his glittering laboratories, it became clear that this is certainly no ordinary scientific endeavour. In fact, Markram's department looks like the interior of the Starship Enterprise, and is full of toys that would make James Bond's Q blush with envy. But how on earth do you build a brain in a computer? And haven't scientists been trying to do that - build an electronic brain - for decades, without success? To understand the sheer importance of what Blue Brain is, it is helpful to understand, first, what it is not. Dr Markram is not trying to build the kind of clanking robot servant beloved of countless sci-fi movies. Real robots may be able to walk and talk and are based around computers that are 'taught' to behave like humans, but they are, in the end, no more intelligent than dishwashers. Markram dismisses these toys as 'archaic'. Instead, Markram is building what he hopes will be a real person, or at least the most important and complex part of a real person - its mind. And so instead of trying to copy what a brain does, by teaching a computer to play chess, climb stairs and so on, he has started at the bottom, with the biological brain itself. Our brains are full of nerve cells called neurons, which communicate with one another using minuscule electrical impulses.
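That picture - billions of simple units exchanging impulses - is the one Blue Brain starts from. As a rough illustration only, here is a toy network in Python with invented wiring and parameters, nothing like Blue Brain's detailed biophysical models: map out which cells connect to which, then step the whole network forward and watch its collective behaviour.

# Toy "connectome" simulation (all numbers are arbitrary assumptions).

import random

random.seed(1)
N = 50  # fifty units here; a rat cortical column has ~10,000 neurons
weights = {(i, j): random.uniform(0.0, 0.3)
           for i in range(N) for j in range(N)
           if i != j and random.random() < 0.1}  # sparse random wiring

v = [0.0] * N  # membrane potentials
THRESHOLD, LEAK = 1.0, 0.9

def step(spiking):
    """Advance one tick; return the set of units that fire."""
    fired = set()
    for j in range(N):
        drive = sum(w for (i, k), w in weights.items()
                    if k == j and i in spiking)
        v[j] = v[j] * LEAK + drive + random.uniform(0.0, 0.2)  # noise
        if v[j] >= THRESHOLD:
            v[j] = 0.0
            fired.add(j)
    return fired

spiking = set()
for t in range(20):
    spiking = step(spiking)
    print(t, sorted(spiking))

Even at this caricature scale the network settles into irregular collective firing, a very weak analogue of the unexpected 'brainwave patterns' the article mentions further on.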
The project literally takes apart actual brains cell by cell, using what amounts to extremely intricate dissecting techniques, analyses the billions of connections between the cells, and then plots these connections into a computer. The upshot is, in effect, a blueprint or carbon copy of a brain, rendered in software rather than flesh and blood. The idea is that by building a model of a real brain, it might - just might - begin to behave like the real thing. To demonstrate how he is achieving this, Markram shows me a machine that resembles an infernal torture engine; a wheel about 2ft across with a dozen ultra-fine glass 'spokes' aimed at the centre. It is here that tiny slivers of rat brain are dissected, using tools finer than a human hair. Their interconnections are then mapped and turned into computer code. A bucket full of slop lies next to the gleaming high-techery. That's where the bits of old rat brain go - a gruesome reminder that amid all this is a project based upon flesh and blood. So far, Markram's supercomputer - an IBM Blue Gene - is able, using the information gleaned from the slivers of real brain tissue, to simulate the workings of about 10,000 neurones, amounting to a single rat's 'neocortical column' - the part of a brain believed to be the centre of conscious thought. That, says Markram, is the hard part. To go further, he is going to need a bigger computer. Using just 30 watts of electricity - enough to power a dim light bulb - our brains can outperform by a factor of a million or more even the mighty Blue Gene computer. But replicating a whole real brain is 'entirely impossible today', Markram says. Even the next stage - a complete rat brain - needs a £200million, vastly more efficient supercomputer. Then what? 'We need a billion-dollar machine, custom-built. That could do a human brain.' But computing power is increasing exponentially and it is only a matter of time before suitable hardware is available. 'We will get there,' says Markram confidently. In fact, he believes that he will have a computer sufficiently powerful to deal with all the data and simulate a human brain before the end of this decade. The result? Perhaps a mind, a conscious, sentient being, able to learn and make autonomous decisions. It is a startling possibility. When faced with such extraordinary claims, one must first ask the question: 'Is he mad?' I have met several scientists who maintain they can change the world: men (they are always men) who say they can build a time machine or a starship, cure cancer or old age. Men who believe telepathy is real, or that Earth has been visited by aliens or, indeed, who claim they are on the verge of creating artificial minds. Most of these men are deluded. Markram is not mad, but he is certainly unsettling. He comes across like a combination of Victorian gentleman scientist and New Age guru. 'You have to understand physics, the structure of the universe and philosophy,' he says, theatrically.
He talks about humans 'not reaching their potential', and of his conviction that more of us have the capacity for genius than we think. He believes his artificial mind could show us how to exploit the untapped potential in our own minds. If we create a being more intelligent than us, maybe it could teach us how to catch up. The best evidence that Markram is not crazy is that he gets his hands dirty. He knows his way around his machines and knows one end of a brain cell from another. The principles underlying his work are firmly rooted in the scientific mainstream. He believes that the deepest and most fundamental properties of being human - thoughts, emotions, the mysterious feeling of self-awareness - arise from trillions of electrochemical interactions that take place in the lump of grey jelly in our heads. He believes there is no mysterious 'soul' that gives rise to the feeling of self. On the contrary, he insists that this results from physical processes inside our skulls. Of course, consciousness is one of the deepest scientific mysteries. How do millions of tiny electrical impulses in our heads give rise to the feeling of self, of pain, of love? No one knows. But if Markram is right, this doesn't matter. He believes that consciousness is probably something that simply 'emerges' given a sufficient degree of organised complexity. Imagine it this way: think of the marvellous patterns that emerge when a flock of starlings swoops in unison at dusk. Thousands of birds are interacting to create a shape that resembles a single unified entity with a life of its own. Markram believes that this is how consciousness might emerge - from billions of separate brain cells combining to create a single sentient mind. But what of the problems such an invention could generate? What if the machine makes demands? What if it begs you not to turn it off, or leave it alone at night? 'Maybe you will have to treat it like a child. Sometimes I will have to say to my child: "I have to go, sorry," ' he explains. Indeed, the artificial brain would throw up a host of moral issues. Could you really use an artificial mind, which behaves like a real mind, to perform experiments on a human mind? Dr David Lester, one of the lead scientists on a rival brain-simulation project at the University of Manchester, says that they are effectively in a race with Markram, a race they will have to win with cunning rather than cash. 'We've got £4million,' Lester says. 'Blue Brain has serious funding from the Swiss government and IBM. Henry Markram is to be taken seriously.' 'The process of building this is going to change society. We will have ethical problems that are unimaginable to us' Manchester is hoping it is possible to simplify key elements of the brain and thus dramatically reduce the computation power needed to replicate them. Others doubt Markram can ever succeed. Imperial College professor Igor Aleksander claims that while Markram can build a copy of a human brain, it will be 'like an empty bucket', incapable of consciousness. And, as Dr Lester points out, 'a newly minted real human brain can't do very much except lie on the floor and gurgle'. Indeed, Professor Markram may end up creating the world's most expensive baby.
But if Markram turns his machine on in 2018, and it utters the famous declaration that underpins Western philosophy, 'I think, therefore I am', he will have confounded his critics. And his ambition is by no means impossible. In the past year, models of a rat brain produced totally unexpected 'brainwave patterns' in the computer software. Is it possible that, for a few seconds maybe, a fleeting rat-like consciousness emerged? 'Perhaps,' Markram says. It is not much, but if a rat, then why not a man? During my meeting I tried to avoid bringing up the name of the most famous (fictional) creator of artificial life, on the grounds of taste. But in the end, I had to mention him. 'Yes, well, Dr Frankenstein. People have made that point,' Markram says with a thin smile. Frankenstein's experiment, of course, went rather horribly wrong. And that was one man, with his monster made from bits of old corpse. A glittering machine brain, perhaps many times more intelligent than our own and created by one of the best-equipped laboratories in the world, carries, perhaps, even more potential for evil, as well as good. -- - Bryan http://heybryan.org/ 1 512 203 0507 From thespike at satx.rr.com Thu Jan 7 17:34:29 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 11:34:29 -0600 Subject: [ExI] Psi (no need to read this post you already know what it says ) In-Reply-To: <79471C131D7F4EE28EE05A725ED29AED@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> Message-ID: <4B461B25.6070405@satx.rr.com> On 1/7/2010 10:33 AM, spike wrote: > John I can assure you that the fucking phenomenon exists. But what has that > to do with Psi? I don't see how the two are related in any way. He's getting confused with the heavy Sigh phenomenon. Damien Broderick From aware at awareresearch.com Thu Jan 7 17:54:34 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 09:54:34 -0800 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: 2010/1/7 John Clark : > On Jan 6, 2010, at 1:20 PM, x at extropica.org wrote: > Me: > >>> we learned from the history of Evolution that consciousness is easy but >>> intelligence is hard. > >> So why don't you agree with me that intelligence must have "existed" >> (been recognizable, if there had been an observer) for quite a long >> time > > Because we learned from the history of Evolution that consciousness is easy > but > intelligence is hard. Well, that response clearly adds nothing to the discussion, and you stripped out my supporting text.
If you mean literally the former, then it appears that you must harbor a mystical notion of "consciousness", that contributes to the somewhat "intelligent" behavior of the amoeba, despite its apparent lack of the neuronal apparatus necessary to support a sense of self. I know John Clark doesn't tolerate mysticism, and I know Damien has already noted your use of the word "effect", so I can only guess that you mean "consciousness" in a way that doesn't require much, if any, hardware support. [I'll note here that in addition to stripping out substantial portions of my supporting text, you've also eliminated my careful definitions of what I meant when I used the words "consciousness" and "intelligence."] > then there is no way Evolution could have "stumbled upon" the > trick of generating consciousness It may be relevant that the way evolution (It's not clear why you would capitalize that word) works is always in terms of blind, stumbling, random variation. Of course genetic variation is strongly constrained, and phenotypic variation is strongly facilitated by preexisting structures. > because it would convey no more adaptive > advantage than eyes or pigment does for creatures that live all their life > in dark caves. There appears to be such a strong disconnect here that I suspect we're not even talking about the same things. It seems obvious that given a particular degree of adaptation of an organism to its environment, then to the extent the organism's fitness would be enhanced by the ability to model possible variations on itself acting within its environment, especially if this facilitates cooperative behaviors with others similar to itself, such "adaptive advantage" would tend to be selected. What do YOU mean? > In short if even one conscious being exists on Planet Earth > and if Evolution is true then the Turing Test works; Huh? If there were only one conscious being, then wouldn't that have to be the one judging the Turing Test? And if there is no other conscious being, how could any (non-conscious by definition) subject pass the test (such that the TT would be shown to "work")? > and if the Turing Test > doesn't work then neither does Evolution. Huh?? >> novel hacks like self-awareness discovered at some point, exploited >> for the additional fitness they confer > > Fine, if true and consciousness aids fitness then it can be deduced from > behavior. Well, not "deduced" but certainly inferred... > Either you can have intelligence without consciousness or you can > not. I'm going to assume, since you emphasize "Evolution", that your propositions should be stated in terms of "evolved organisms", and not in terms of the more general "systems that display behavior assessed as intelligent." So, (A) Evolved organisms can be correctly assessed as displaying intelligence but without consciousness. (B) Evolved organisms can be correctly assessed as displaying intelligence along with consciousness. > The propositions lead to mutually contradictory conclusions, they can't > both be right and you can't claim both as your own. You've got to make a > choice. Why? It seems to me that we observe the existence of both classes of evolved organisms. As I've said before, a wide range of organisms can display behaviors expressing a significant degree of effective prediction and control in regard to their environment of adaptation. And the addition of a layer of supervisory self-awareness can be a beneficial add-on for more advanced environments of interaction.
I'm guessing that our disagreement here comes down to different usage and meaning of the terms "intelligence" and "consciousness" and it might be significant that you stripped out all evidence and results of my efforts to effectively define them. You seem not to play fair, so it's not much fun. - Jef From ismirth at gmail.com Thu Jan 7 17:55:48 2010 From: ismirth at gmail.com (Isabelle Hakala) Date: Thu, 7 Jan 2010 12:55:48 -0500 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <4B461B25.6070405@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> Message-ID: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> First off, what was the impetus for this topic being started? Secondly, to the person who started it, how do you define psi (so that we may have a common language for the discussion)? What parts of the definition are you refuting exist? It seems that this post must have spun off from some other discussion but the relevant parts were not copied over. I am a scientist, and I have had many things happen that I would consider to qualify as 'psi'. I have simply *known* when someone was in a car accident. I have just *known* when someone died. I have also just *known* the exact moment someone read an email from me. These things have not happened consistently as an adult but as a child I always knew when the phone was about to ring, and who was on it. Maybe none of those were strong enough to be considered psi, but if there is even an smidgen of something that could be psi, then there are far more things that could be possible. Please try to keep these discussions civil, as we want to encourage people to share their opinions without feeling attacked by others, otherwise we will not have a diversity of opinions, which is needed to stretch our capacity for reasoning. -Isabelle ~~~~~~~~~~~~~~~~~~~~~~~ Isabelle Hakala "Any person who says 'it can't be done' shouldn't be interrupting the people getting it done." "Do every single thing in life with love in your heart." On Thu, Jan 7, 2010 at 12:34 PM, Damien Broderick wrote: > On 1/7/2010 10:33 AM, spike wrote: > > John I can assure you that the fucking phenomenon exists. But what has >> that >> to do with Psi? I don't see how the two are related in any way. >> > > He's getting confused with the heavy Sigh phenomenon. > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cetico.iconoclasta at gmail.com Thu Jan 7 18:25:36 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Thu, 7 Jan 2010 16:25:36 -0200 Subject: [ExI] Psi (no need to read this post you already know whatitsays ) References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> Isabelle Hakala >>I am a scientist, and I have had many things happen that I would consider to qualify as 'psi'. I have simply *known* when someone >was in a car accident. I have just *known* when someone died. 
I have also >just *known* the exact moment someone read an email from me. These >things have not happened consistently as an adult but as a child I always >knew when the phone was about to ring, and who was on it. Maybe none of >those were strong enough to be considered psi, but if there is even an >smidgen of something that could be psi, then there are far more things that >could be possible. Well, can you tell which bone I broke last year and what caused it? From nanite1018 at gmail.com Thu Jan 7 19:52:22 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Thu, 7 Jan 2010 14:52:22 -0500 Subject: Re: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <2631098A-CF5B-4898-8208-7380C4C8D867@GMAIL.COM> > I am a scientist, and I have had many things happen that I would > consider to qualify as 'psi'. I have simply *known* when someone was > in a car accident. I have just *known* when someone died. I have > also just *known* the exact moment someone read an email from me. > These things have not happened consistently as an adult but as a > child I always knew when the phone was about to ring, and who was on > it. Maybe none of those were strong enough to be considered psi, but > if there is even an smidgen of something that could be psi, then > there are far more things that could be possible. > -Isabelle You did not "always" know these things. As a scientist you should be more careful of bias (as a child, you almost certainly weren't careful to protect against such things). You likely sometimes had a random "feeling" and when it happened you remembered; when something didn't happen, you forgot. Lucky guesses once in a great while about stuff like when people read emails or when someone was in a car wreck (you may have actually retroactively attributed the wreck as the cause of your feeling, when in fact it was something different). There is no evidence that this stuff exists, at least not anything statistically significant. It is surprising to me that many otherwise perfectly rational people buy into the nonsense one finds in the new "metaphysical studies" section of Borders. Joshua Job nanite1018 at gmail.com From ismirth at gmail.com Thu Jan 7 20:01:07 2010 From: ismirth at gmail.com (Isabelle Hakala) Date: Thu, 7 Jan 2010 15:01:07 -0500 Subject: Re: [ExI] Psi (no need to read this post you already know whatitsays ) In-Reply-To: <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> Message-ID: <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> No, not really, it is more passive than that. I don't really have any control over it happening. I had an image of your leg, below your knee, come to mind, and I also had a tree come to mind, but I wouldn't know what the heck that means. With the car accident I suddenly felt what felt like "Michael's fear of being in a car accident" (it was Michael in the car accident).
I immediately called his cell, and he didn't answer so I drove to his house. He showed up an hour later and he said that about 30 seconds after the accident he heard his cell ringing, but it was trapped under the seat where he couldn't get to it to answer or call me back. Also, with the phone calls, I would regularly yell to my mom to answer the phone, and yell who it was calling, and *then* the phone would ring. This really freaked my mom out and she finally asked me to stop doing it. This was in the late 70's and early 80's. The calls were random and one of the calls was from someone my mother hadn't spoken to in many years, and yet I said who it was before the phone rang, and was correct. When I said someone was calling, and who it was, the phone always rang right away, and the person was always correct. After my mother asked me to stop I couldn't do it anymore. And as a scientist, I think that people need to realize that just because we don't understand something or have proof of it, does NOT mean it doesn't exist. Before we knew what molecules were, or how to see them or detect them, there still were molecules, and anyone would have thought you crazy if you tried to convince them that molecules existed. ~~~~~~~~~~~~~~~~~~~~~~~ Isabelle Hakala "Any person who says 'it can't be done' shouldn't be interrupting the people getting it done." "Do every single thing in life with love in your heart." On Thu, Jan 7, 2010 at 1:25 PM, Henrique Moraes Machado (CI) < cetico.iconoclasta at gmail.com> wrote: > Isabelle Hakala >>I am a scientist, and I have had many things happen that > I would consider to qualify as 'psi'. I have simply *known* when someone > > was in a car accident. I have just *known* when someone died. I have also >> just *known* the exact moment someone read an email from me. These >> things have not happened consistently as an adult but as a child I always >> knew when the phone was about to ring, and who was on it. Maybe none of
Maybe none of those were strong enough to > be considered psi, but if there is even an smidgen of something that could > be psi, then there are far more things that could be possible. Mmhhh. For me, "psi" is simply the fact that we appear to guess the right card, in experiments that can be repeated at will, infinitesimally more often than we statistically should. -- Stefano Vaj From ismirth at gmail.com Thu Jan 7 20:23:42 2010 From: ismirth at gmail.com (Isabelle Hakala) Date: Thu, 7 Jan 2010 15:23:42 -0500 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <580930c21001071211i44e07cb8kca3621d5a4769035@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <580930c21001071211i44e07cb8kca3621d5a4769035@mail.gmail.com> Message-ID: <398dca511001071223q531fc56bj7d9f8f87c08c1786@mail.gmail.com> Also, lots of people as children would be out playing somewhere and then just *know* that they were in trouble, and not even have a clue as to *why*, but just run straight home, and find their parent waiting on the doorstep for them. This happened to me a couple of times, and to several of my friends through my teenage years. I don't think it qualifies for anything that someone can test, but there are circumstances that convince people that we have more abilities than just the obvious ones. ~~~~~~~~~~~~~~~~~~~~~~~ Isabelle Hakala "Any person who says 'it can't be done' shouldn't be interrupting the people getting it done." "Do every single thing in life with love in your heart." On Thu, Jan 7, 2010 at 3:11 PM, Stefano Vaj wrote: > 2010/1/7 Isabelle Hakala : > > I am a scientist, and I have had many things happen that I would consider > to > > qualify as 'psi'. I have simply *known* when someone was in a car > accident. > > I have just *known* when someone died. I have also just *known* the exact > > moment someone read an email from me. These things have not happened > > consistently as an adult but as a child I always knew when the phone was > > about to ring, and who was on it. Maybe none of those were strong enough > to > > be considered psi, but if there is even an smidgen of something that > could > > be psi, then there are far more things that could be possible. > > Mmhhh. For me, "psi" is simply the fact that we appear to guess the > right card, in experiments that can be repeated at will, > infinitesimally more often than we statistically should. > > -- > Stefano Vaj > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Jan 7 20:25:34 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 7 Jan 2010 21:25:34 +0100 Subject: [ExI] effect/affect again In-Reply-To: <014A7D44-C323-482C-BF40-3537A46F37BB@bellsouth.net> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> <4B4236C0.4040307@satx.rr.com> <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> <014A7D44-C323-482C-BF40-3537A46F37BB@bellsouth.net> Message-ID: <580930c21001071225g726be589md2d6e06f847431fc@mail.gmail.com> Strange how if you are Neolatin mother tongue all that does not sound far-fetched in the least... 
;-) 2010/1/4 John Clark : > On Jan 4, 2010, spike wrote: > Fortunately, Damien is an affable character, even if at times ineffable. > > And redoubtable too when he wasn't being inscrutable. -- Stefano Vaj From aware at awareresearch.com Thu Jan 7 21:11:30 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 13:11:30 -0800 Subject: [ExI] Paper: Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation Message-ID: Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation A technical paper published 2010-01-06 showing progress in areas of particular interest to me. Related to the problem of quantification of coherence over a context of mutual interaction information as well as to Ulanowicz's notion of using a local reduction in uncertainty based on mutual information among three or more dimensions as an indicator of "ascendency." Heady stuff. This paper might be seen as somewhat related to the issue of Searle's Chinese Room since it does address an approach to quantifying the **observer-dependent** meaningfulness of mutual interaction information within a system, but it has no reason to say anything about the epistemological context of that debate. [Lizbeth said I should try using different words so people wouldn't tend to tune me out. Fortunately these words from Loet Leydesdorff arrived just in time to do the job.] - Jef From thespike at satx.rr.com Thu Jan 7 21:15:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 15:15:27 -0600 Subject: Re: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <4B464EEF.1000200@satx.rr.com> On 1/7/2010 11:55 AM, Isabelle Hakala wrote: > Please try to keep these discussions civil, as we want to encourage > people to share their opinions without feeling attacked by others, > otherwise we will not have a diversity of opinions, which is needed to > stretch our capacity for reasoning. That's unlikely, because the topic (for reasons probably having to do with immunization reactions to religion) is toxic to many otherwise open-minded people here, so they react with vehemence and without any appeal to evidence. Some refuse even to consider evidence when it's provided (John Clark, say, who proudly declares that he won't look at anything pretending to be evidence for psi, since he knows a priori that it's BULLSHIT!!!). Anyone interested in my pro-psi opinion will have to read my 350pp book on the topic, OUTSIDE THE GATES OF SCIENCE; I'm tired of repeating in bite-sized chunks what I've already spent a lot of effort writing carefully. (For an opinion of the book and the topic by one very bright and open-minded sometime ExIchat poster, consult Ben Goertzel's review on the amazon site.)
Damien Broderick From spike66 at att.net Thu Jan 7 21:22:00 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 13:22:00 -0800 Subject: Re: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> Message-ID: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> ...On Behalf Of Isabelle Hakala ... Subject: Re: [ExI] Psi (no need to read this post you already knowwhatitsays ) > ...he said that about 30 seconds after the accident he heard his cell ringing, but it was trapped under the seat where he couldn't get to it to answer or call me back... Isabelle, I know how to explain this. Michael had a premonition that you were going to call him, to tell him you had a premonition he had been in an accident. He was fumbling around looking for his cell phone instead of watching where he was going and BOOM. That accident caused your premonition (of which he had already had a premonition) and you called him, but by then it was too late. I am having a strange feeling or vision about you. It involves a computer and a chair, but no broken bones or trees. I don't know what it means. > ...Also, with the phone calls, I would regularly yell to my mom to answer the phone, and yell who it was calling, and *then* the phone would ring. This really freaked my mom out and she finally asked me to stop doing it. This was in the late 70's and early 80's... Perhaps your young ears were able to detect the ultra high frequency sound that the 70s era telephone electromechanical devices would make a couple of seconds before the phone rang. Recall those things had a capacitor in them, which had to charge, and the discharge cycle would cause the ring to be the usual intermittent signal. The only reason I know about this is that back in the old days when long distance phone calls cost a lot of money, people would regularly signal each other by prearranging to call at a certain time; the number of rings would be translated to a message. An example is the signal agreed upon in my own misspent youth regarding the approaching redcoats: one if by land, two if by sea. Of course if no one answered, the call was free. The phone company figured it out and responded by de-synchronizing what the caller heard and what the called phone did so that they didn't necessarily agree anymore. Isabelle, young people such as yourself perhaps do not recall the days when ripping off the phone company was great nerd entertainment. Apple computer was started by a bunch of geeks who chose ripping off the phone company over the usual high school preoccupation, attempting to effect(v) recreational copulation, commonly known as the fucking phenomenon. > ...The calls were random and one of the calls was from someone my mother hadn't spoken to in many years, and yet I said who it was before the phone rang, and was correct. When I said someone was calling, and who it was, the phone always rang right away, and the person was always correct. After my mother asked me to stop I couldn't do it anymore... Just a guess, but I will offer an explanation for why I could never do the feat you describe.
I had a premonition that my mother would ask me to cut the crap with the whole anticipating phone calls phenomenon because it was freaking her beak, and so I could not do it anymore before I actually ever could do it to start with. It was a preemptive attack on my premonitions. As a closing comment on this topic, a weird thing happened to me the other day. I had a strange feeling that nothing would happen. Suddenly and without warning, nothing happened. A minute passed. I looked at my watch. It was a minute past. It is so weird, I can't explain it. spike (Isabelle, you are new here. A warm extropian welcome to you my friend. I am well known in these parts for posting this kind of silliness.) From aware at awareresearch.com Thu Jan 7 21:51:07 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 13:51:07 -0800 Subject: Re: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Thu, Jan 7, 2010 at 8:28 AM, Aware wrote: > On Thu, Jan 7, 2010 at 6:40 AM, Stathis Papaioannou wrote: >> I agree with Hofstadter's, and your, epiphenomenalism. > > But it's not most people's idea of epiphenomenalism, where the > "consciousness" they know automagically emerges from a system of > sufficient complexity and configuration. Rather, it's an > epistemological understanding of the (recursive) relationship between > the observer and the observed. Relevant to this discussion is an article in New Scientist, just today: Note all the angry righteous commenters, defending Science against this affront to reductionist materialism. Note too, if you can, that they didn't understand the content that they attack. Only after more than twenty angry comments, someone posted the following: "Sorry, I'm not the best at explaining these things, but read up on phenomenology or qualitative research's epistemology and you should see the thrust of his argument. And to the person who argued that we can't understand a computer according to this argument, that was a bit of a straw man fallacy. The computer, both as the machine and the appearances on the screen are objects or phenomena to be observed, not an observer. Unless the computer is trying to address it's own ontology, it is an observed object being observed by an outside subject making it still under the usual rules of quantitative epistemology. It is when you try to observe the observation of the object that things would get complicated. I can't say whether or not he's correct, but I think it is a useful critique on the epistemology of neuroscience." - Jef - Jef From aware at awareresearch.com Thu Jan 7 22:04:57 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 14:04:57 -0800 Subject: [ExI] Telephone hacking Message-ID: On Thu, Jan 7, 2010 at 1:22 PM, spike wrote: > Recall those things had a capacitor in them, > which had to charge, and the discharge cycle would cause the ring to be the > usual intermittent signal. The only reason I know about this is that back > in the old days when long distance phone calls cost a lot of money, people > would regularly signal each other by prearranging to call at a certain time; > the number of rings would be translated to a message. An example is the > signal agreed upon in my own misspent youth regarding the approaching > redcoats: one if by land, two if by sea.
Reminds me of one of the things I did in MY misspent youth: It turns out that while the phone was ringing, even though it hadn't been answered (picked up) there was already an audio connection available through the circuit. I had a friend about 30 miles away (long distance charges) but we were able to communicate by voice--better than counting number of rings--while the phone was ringing, by using audio amplifiers (essentially an intercom) with capacitive coupling to block the 45VDC and diode clipping to limit the 90VAC ring signal. Oh yeah, I was a wild electronics experimenter in my youth... - Jef From thespike at satx.rr.com Thu Jan 7 22:06:47 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 16:06:47 -0600 Subject: Re: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: <4B465AF7.1030709@satx.rr.com> On 1/7/2010 3:51 PM, Aware quoth: > It is when you try to observe the observation of the > object that things would get complicated. I can't say whether or not > he's correct, but I think it is a useful critique on the epistemology > of neuroscience." > > - Jef > > - Jef Is that meta-Jef observing the observation of Jef? From aware at awareresearch.com Thu Jan 7 22:12:46 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 14:12:46 -0800 Subject: Re: [ExI] Some new angle about AI In-Reply-To: <4B465AF7.1030709@satx.rr.com> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <4B465AF7.1030709@satx.rr.com> Message-ID: On Thu, Jan 7, 2010 at 2:06 PM, Damien Broderick wrote: > On 1/7/2010 3:51 PM, Aware quoth: > >> It is when you try to observe the observation of the >> object that things would get complicated. I can't say whether or not >> he's correct, but I think it is a useful critique on the epistemology >> of neuroscience." >> >> - Jef >> >> - Jef Hehe. Yes, a bad habit but sometimes good for catching Jef just before he does something impulsive. - Jef From stathisp at gmail.com Thu Jan 7 22:13:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 09:13:49 +1100 Subject: Re: [ExI] The symbol grounding problem in strong AI In-Reply-To: <446942.27612.qm@web36504.mail.mud.yahoo.com> References: <446942.27612.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/8 Gordon Swobe : > --- On Thu, 1/7/10, Stathis Papaioannou wrote: > >> There *must* be something uncomputable about the behaviour of neurons... > > No. (Of course I don't claim that there must be something uncomputable about neurons, it's only if, as you seem to be saying, p-neurons are impossible that there must be something uncomputable about neurons.) >>... if it can't be copied well enough to make p-neurons, >> artificial neurons which behave exactly like b-neurons but lack the >> essential ingredient for consciousness. This isn't a contingent fact, >> it's a logical requirement. > > Yes and now you see why I claim Cram's surgeon must go in repeatedly to patch the software until his patient passes the Turing test: because the patient has no experience, the surgeon must keep working to meet your logical requirements. The surgeon finally gets it right with Service Pack 9076. Too bad his patient can't know it. The surgeon will be rightly annoyed if the tweaking and patching has not been done at the factory so that the p-neurons just work.
-- Stathis Papaioannou From stathisp at gmail.com Thu Jan 7 22:18:06 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 09:18:06 +1100 Subject: Re: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <187B6A0A-A8FD-42EB-BC18-A2178641FC72@bellsouth.net> References: <527984.77632.qm@web36507.mail.mud.yahoo.com> <187B6A0A-A8FD-42EB-BC18-A2178641FC72@bellsouth.net> Message-ID: 2010/1/8 John Clark : > On Jan 7, 2010, Gordon Swobe wrote: > > If we knew exactly what physical conditions must exist in the brain for > consciousness to exist, i.e., if we knew everything about the NCC, > > This NCC of yours is gibberish. You state very specifically that it is not > the signals between neurons that produce consciousness, so how can some sort > of magical awareness inside the neuron correlate with anything? You must > have 100 billion independent conscious entities inside your head. The NCC is either gibberish or something trivially obvious, like oxygen, since without it neurons wouldn't work and you would lose consciousness. -- Stathis Papaioannou From steinberg.will at gmail.com Thu Jan 7 22:42:16 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 7 Jan 2010 17:42:16 -0500 Subject: Re: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <4e3a29501001071442r3a67071duaedf6deaf9d5fbd3@mail.gmail.com> The "psi" I speak of refers to (and my delineating of this certainly does not mean I am anything but a curious skeptic) any unexplained cognitive phenomena which are predictive or observational like those that I have listed, some, like telepathy, being favored over sillier ones. I know it is trite to bring up the whole "nobody believed regular physics, and regular physics was right; nobody believed quantum physics, and quantum physics was right" thing, but honestly, a mind too closed to at least prove why they feel these things are impossible may not be expressing a truly extropian mindset. It would seem beneficial to at least humor the idea, given that its existence would mean a revelation in the intellectual community. I am sure that all of you in the intellectual elect would be able to devise theories or experiments to prove or disprove. It's hard not to think "How could so many intelligent people think this has value without it actually having value?", but then I think maybe it's just some residual, romantic, magic-world security blanket stuck in my brain, though I am a deterministic nihilist like most of you so I would hope that is not the case. I'm not sure about this, but since when has that ever been an excuse for abandoning intellectual pursuit? You have plenty of time to speculate while waiting for the thread on Dyson shells to update, though I would imagine considerably less time than waiting for the *existence* of Dyson shells to update. So--use some statistics and show why it's theoretically impossible, for science's sake. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From scerir at libero.it Thu Jan 7 22:47:15 2010 From: scerir at libero.it (scerir) Date: Thu, 7 Jan 2010 23:47:15 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <19052174.225601262904435226.JavaMail.defaultUser@defaultHost> > Mmhhh. For me, "psi" is simply the fact that we appear to guess the > right card, in experiments that can be repeated at will, > infinitesimally more often than we statistically should. > Stefano Vaj hey, there are amazing experiments here http://www.parapsych.org/online_psi_experiments.html http://www.fourmilab.ch/rpkp/experiments/ From stathisp at gmail.com Thu Jan 7 22:54:38 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 09:54:38 +1100 Subject: Re: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/8 Aware : > Your insistence that it is this simple is prolonging the cycling of > that "strange loop" you're in with Gordon. It's not always clear what > Gordon's argument IS--often he seems to be parroting positions he > finds on the Internet--but to the extent he is arguing for Searle, he > is not arguing against functionalism. Searle is explicitly opposed to functionalism. He allows that *some* machine that reproduces the function of the brain would reproduce consciousness but not that *any* machine would do so: computers, beer cans and toilet paper and the CR wouldn't cut it, for example. > Given functionalism, and the "indisputable 1st person evidence" of the > existence of consciousness/qualia/meaning/intensionality within the > system ("where else could it be?"), he points out quite correctly that > no matter how closely one looks, no matter how subtle one's formal > description might be, there's syntax but no semantics in the system. > > So I suggest (again) to you and Gordon, and Searle, that you need to > broaden your context. That there is no essential consciousness in the > system, but in the recursive relation between the observer and the > observed. Even (or especially) when the observer and observed are > functions of the same brain, you get self-awareness entailing the > reported experience of consciousness, which is just as good because > it's all you ever really had. Isn't the relationship between the observer and observed a function of the observer-observed system? >> It's not immediately obvious that this is a silly idea, >> and a majority of people probably believe it. > > Your faith in functionalism is certainly a step up from the > assumptions of the silly masses. But everyone in this discussion, and > most denizens of the Extropy list, already get this. > >> However, it can be shown >> to be internally inconsistent, and without invoking any assumptions >> other than that consciousness is a naturalistic phenomenon. > > Yes, but that's not the crux of this disagreement. In fact, there is > no crux of this disagreement since to resolve it is not to show what's > wrong within, but to reframe it in terms of a larger context. Maybe, but it's also satisfying to show in a debate without introducing extraneous ideas that the premises your opponent presents you with lead to inconsistency. > Searle and Gordon aren't saying that machine consciousness isn't > possible. If you pay attention you'll see that once in a while they'll come right out and say this, at which point you think they've expressed an inconsistency.
They're saying that even though it's > obvious that some machines (e.g. humans) do have consciousness, it's > also clear that no formal system implements semantics. And they're > correct. What about this idea: there is no such thing as semantics, really. It's all just syntax. -- Stathis Papaioannou From aware at awareresearch.com Fri Jan 8 00:05:04 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 16:05:04 -0800 Subject: Re: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Thu, Jan 7, 2010 at 2:54 PM, Stathis Papaioannou wrote: > 2010/1/8 Aware : > > Searle is explicitly opposed to functionalism. I admit it's been about 30 years since I read Searle's stuff and related commentary in detail, but I think I've kept a clear understanding of where he went wrong--and of which I have yet to see evidence of your understanding. [That sounds a little harsh, doesn't it? (INTJ)] Seems to me that Searle accepts functionalism, but never makes it explicit, possibly for the sake of promoting argument over his beloved point. Seems to me that nearly everyone reacts with some disdain to his apparent affront to functionalism, and then proceeds to argue entirely on that basis. But if you watch carefully, he accepts functionalism, IFF the candidate machine/substrate actually reproduces the function of the brain. But then he goes on to show that for any formal description of any machine, there's no place IN THE MACHINE where understanding actually occurs. He's right about that. But here he goes wrong: He claims that human brains obviously do have understanding, and suggests that he has therefore proved that there is something different about attempts to produce the same in machines. But there's no understanding in the human brain, either, nor any evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER. We don't have understanding in our brains, but we don't need it. Never did. We have only actions, which appear (with good reason) to be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE ACTOR ITSELF. Sure it's non-intuitive. It's Zen. In the true, non-bastardized sense of the word. And if you're gonna design an AI that displays consciousness, then it would be helpful to understand this so you don't spin your wheels trying to figure out how to implement it. >> So I suggest (again) to you and Gordon, and Searle, that you need to >> broaden your context. That there is no essential consciousness in the >> system, but in the recursive relation between the observer and the >> observed. Even (or especially) when the observer and observed are >> functions of the same brain, you get self-awareness entailing the >> reported experience of consciousness, which is just as good because >> it's all you ever really had. > > Isn't the relationship between the observer and observed a function of > the observer-observed system? No. The system that is being observed has no place in it where meaning/semantics/qualia/intentionality can be said to exist. If you look closely all you will find is components in a chain of cause and effect. Syntax but no semantics, as Gordon pointed out early on in this discussion. But an observer, at whatever level of recursion, will report meaning in its terms. It may help to consider this: If I ask you (or you ask yourself (Don't worry; it's recursive)) about the redness of an apple that you are seeing, that "experience" never occurs in real-time.
It's always only a product of some processing that necessarily takes some time. Real-time experience never happens; it's a logical and practical impossibility. So in any case, the information corresponding to the redness of that apple, its luminance, its saturation, its flaws, its associations with the remembered red of a fire truck, and on and on, is in effect delivered or made available, after some delay, to another system. And that system will do whatever it is that it will do, determined by its nature within that context. In the case of delivery to the system (observer) that is going to find out about that red, then the observer system will then do something with that information (again completely determined by its nature, within that context.) The observer system might remark out loud about the redness of the apple, and remember doing so. It may say nothing, and only store the new perception (of perceiving) the redness. A moment later it may use that perception (from memory) again, of course linked with newly delivered information as well. If at any point the nature of the observer (within context, which might be me asking you what you experienced) focuses attention again on information about its internal state, the process repeats, keeping the observer process pretty well satisfied. From a third-person point of view, there was never any meaning anywhere in the system, including within the observer we just described. But if you ask the observer about the experience, of course it will truthfully report in terms of first-person experience. What more is there to say? > What about this idea: there is no such thing as semantics, really. > It's all just syntax. Yes, well, it all depends on your context, which is what I've been saying all along. - Jef From spike66 at att.net Fri Jan 8 01:16:10 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 17:16:10 -0800 Subject: [ExI] golden ratio discovered in quantum world In-Reply-To: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM><79471C131D7F4EE28EE05A725ED29AED@spike><4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com><02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm><398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <24B110CD0BA14840BA294A133E8051D9@spike> Since we are discussing weirdness, check this: http://www.physorg.com/news182095224.html Golden ratio discovered in a quantum world Researchers from the Helmholtz-Zentrum Berlin für Materialien und Energie (HZB, Germany), in cooperation with colleagues from Oxford and Bristol Universities, as well as the Rutherford Appleton Laboratory, UK, have for the first time observed a nanoscale symmetry hidden in solid state matter. They have measured the signatures of a symmetry showing the same attributes as the golden ratio famous from art and architecture. The research team is publishing these findings in Science on the 8 January. On the atomic scale particles do not behave as we know it in the macro-atomic world. New properties emerge which are the result of an effect known as Heisenberg's Uncertainty Principle. In order to study these nanoscale quantum effects the researchers have focused on the magnetic material cobalt niobate.
It consists of linked magnetic atoms, which form chains just like a very thin bar magnet, but only one atom wide and are a useful model for describing ferromagnetism on the nanoscale in solid state matter. When applying a magnetic field at right angles to an aligned spin the magnetic chain will transform into a new state called quantum critical, which can be thought of as a quantum version of a fractal pattern. Prof. Alan Tennant, the leader of the Berlin group, explains "The system reaches a quantum uncertain - or a Schrödinger cat state. This is what we did in our experiments with cobalt niobate. We have tuned the system exactly in order to turn it quantum critical." By tuning the system and artificially introducing more quantum uncertainty the researchers observed that the chain of atoms acts like a nanoscale guitar string. Dr. Radu Coldea from Oxford University, who is the principal author of the paper and drove the international project from its inception a decade ago until the present, explains: "Here the tension comes from the interaction between spins causing them to magnetically resonate. For these interactions we found a series (scale) of resonant notes: The first two notes show a perfect relationship with each other. Their frequencies (pitch) are in the ratio of 1.618, which is the golden ratio famous from art and architecture." Radu Coldea is convinced that this is no coincidence. "It reflects a beautiful property of the quantum system - a hidden symmetry. Actually quite a special one called E8 by mathematicians, and this is its first observation in a material", he explains. The observed resonant states in cobalt niobate are a dramatic laboratory illustration of the way in which mathematical theories developed for particle physics may find application in nanoscale science and ultimately in future technology. Prof. Tennant remarks on the perfect harmony found in quantum uncertainty instead of disorder. "Such discoveries are leading physicists to speculate that the quantum, atomic scale world may have its own underlying order. Similar surprises may await researchers in other materials in the quantum critical state." The researchers achieved these results by using a special probe - neutron scattering. It allows physicists to see the actual atomic scale vibrations of a system. Dr. Elisa Wheeler, who has worked at both Oxford University and Berlin on the project, explains "using neutron scattering gives us unrivalled insight into how different the quantum world can be from the every day". However, "the conflicting difficulties of a highly complex neutron experiment integrated with low temperature equipment and precision high field apparatus make this a very challenging undertaking indeed." In order to achieve success "in such challenging experiments under extreme conditions" the HZB in Berlin has brought together world leaders in this field. Combining the special expertise in Berlin with the pulsed neutrons at ISIS, near Oxford, permitted a perfect combination of measurements to be made.
Article in Science, DOI: 10.1126/science.1180085 From thespike at satx.rr.com Fri Jan 8 01:56:32 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 19:56:32 -0600 Subject: Re: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <4B4690D0.5000200@satx.rr.com> On 1/7/2010 3:22 PM, spike wrote: >> > ...Also, with the phone calls, I would regularly yell to my mom to >> >answer the phone, and yell who it was calling, and *then* the phone would >> > ring. This really freaked my mom out and she finally asked me to stop doing >> > it. This was in the late 70's and early 80's... > Perhaps your young ears were able to detect the ultra high frequency sound > that the 70s era telephone electromechanical devices would make a couple of > seconds before the phone rang. Recall those things had a capacitor in them, > which had to charge, and the discharge cycle would cause the ring to be the > usual intermittent signal. Yes, this sort of explanation is just the kind of thing real parapsychologists immediately look for when examining "natural experiments." And such explanations can sometimes be overlooked and only discovered later. But tell me, Spike, how does this account for controlled double blinded tests where you wait by the phone for a call from one of several randomized callers at a certain time, hear the ring, name the caller, then answer--all of this under observation. Pure chance result, right? What else can it be? But wait--Does the magic capacitor sound different when it's Aunt Jane or Uncle Bill? Maybe so--do tell. See the following, and deconstruct away: quote: < Abstract - Telepathy with the Nolan Sisters Journal of the Society for Psychical Research 68, 168-172 (2004) A FILMED EXPERIMENT ON TELEPHONE TELEPATHY WITH THE NOLAN SISTERS by RUPERT SHELDRAKE, HUGO GODWIN AND SIMON ROCKELL ABSTRACT: The ability of people to guess who is calling on the telephone has recently been tested experimentally in more than 850 trials. The results were positive and hugely significant statistically. Participants had four potential callers in distant locations. At the beginning of each trial, remote from the participant, the experimenter randomly selected one of the callers by the throw of a die, and asked the chosen caller to ring the participant. When the phone rang, the participant guessed who the caller was before picking up the receiver. By chance, about 25% of the guesses would have been correct. In fact, on average 42% were correct. The present experiment was an attempt to replicate previous tests, and was filmed for television. The participant and her callers were all sisters, formerly members of the Nolan Sisters band, popular in Britain in the 1980s. We conducted 12 trials in which the participant and her callers were 1 km apart. Six out of 12 guesses (50%) were correct. The results were significant at the p=0.05 level.
For full text in html or pdf formats linked at site> Spike's expected response: well, hmmm, there's got to be a technical reason for this, but I've got to feed the kid now so I'll think about it next week if I can remember. My good pal John Clark's response: That Sheldrake idiot is a fool and made it all up and anyway they cheated and it's all BULLSHIT, I'm not wasting any time reading this crap. My response: Hmm, potentially interesting but the numbers are too small to be anything but extremely provisional, let's see some more replications by other people. Damien Broderick From spike66 at att.net Fri Jan 8 03:18:28 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 19:18:28 -0800 Subject: Re: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B4690D0.5000200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> Message-ID: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Damien Broderick > ... > > By chance, about 25% of the guesses would have been correct. > In fact, on average 42% were correct. The present experiment > was an attempt to replicate previous tests, and was filmed > for television. The participant and her callers were all > sisters, formerly members of the Nolan Sisters band, popular > in Britain in the 1980s. We conducted 12 trials in which the > participant and her callers were 1 km apart. Six out of 12 guesses > (50%) were correct. The results were significant at the p=0.05 level. > > > For full text in html or pdf formats linked at site> > > Spike's expected response: well, hmmm, there's got to be a > technical reason for this, but I've got to feed the kid now > so I'll think about it next week if I can remember... Damien Broderick {8^D I do need to go spend some quality time with my favorite larva, but before I do that, I would conjecture that an explanation for p=.05 is not necessarily needed. Many such experiments could have been done, but the only ones we hear about are those which go beyond 5% weird. Regarding the phrase "...was filmed for television..." that makes me suspicious right up front, because it forms a filter: in such a medium, only noteworthy stuff is worthy of note. Consider the Monty Python comedy troupe from the 70s. Their stuff was mostly unscripted, ad-lib, and yet it was hilarious, ja? One wonders how they could possibly be so funny, in front of a live audience no less. Well, they would clown around for hours, then pick the stuff that the audience loved, and that concentrated the laughs to where you have knights that say NI and so forth. They might have had to cut up for 20 hrs to get a good hilarious hour of TV. Similarly, there could have been a number of sister acts, and the only ones that made it to prime time were the Nolans. The most astounding thing about this experiment is not that they managed statistical significance, but rather that one pair of humans could spawn five larvae of such jaw dropping comeliness as this group of stunning beauties.
http://www.youtube.com/watch?v=P8ACght8QFE Oh my evolution, what a bevy of lovelies are these. I had never heard of them before you pointed to it, and for that I do thank you sir. Their music is gorgeous too. With those looks they could have sung like fingernails on a chalk board, and I would still like them, but that they should all be sisters and all sing like angels is far more remarkable than their performance at phone guessing. spike From thespike at satx.rr.com Fri Jan 8 03:32:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 21:32:14 -0600 Subject: Re: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> Message-ID: <4B46A73E.2090200@satx.rr.com> On 1/7/2010 9:18 PM, spike wrote: > I would conjecture that an explanation for p=.05 is not > necessarily needed. Many such experiments could have been done, but the > only ones we hear about are those which go beyond 5% weird. > > Regarding the phrase "...was filmed for television..." that makes me > suspicious right up front, because it forms a filter: in such a medium, only > noteworthy stuff is worthy of note. If you're assuming a lack of probity in the experimenter, why not just say the whole thing was scripted or made up and have done with it? This is the bottom line with most skeptical retorts. What would it take to dispose of this canard?
Sworn > statements from everyone involved? No, people like that, > they'd lie for profit, right? Or just because they're deluded > loons. Or maybe I just invented it to sell my books. > > Damien Broderick Far too modest are you, Damien. Your books sell themselves, by the outstanding nature of the content. Regarding the experimenters, did they actually claim to have no other groups than the Nolans? I am not accusing them of trickery. I may be willing to accuse the TV producers of amplifying the strangeness, but they should have no heartburn from this, for the job of TV people is to entertain. I had an idea which may have contributed to the remarkable outcome of the Nolan sisters experiment. The sisters would know each others' sleep patterns: who stayed up late, who was the early riser. If the call came in early or late, it may reduce the likely pool by a sister or two in some cases. The experimenters could have been unaware of this themselves, so there is no need for accusations. Another possibility is that family chatter would tip off the sisters if one was temporarily absent from the game: off to her father-in-law's hospital bed for instance, reducing by one the pool of possibilities. The remaining sisters may not think to suspend the game until rejoined by the fifth singing beauty. Dunno, Damien. There might be something weird going on, but the proof is terribly elusive almost by design. In engineering, when one gets a greater than 3 sigma result in a measurement, the experiment is assumed flawed and often discarded, thus the oft-seen footnote "3 sigma clipping." What the field needs at this point is not more weird experimental results but rather some plausible theoretical basis. Consider cryonics. No one took that seriously until 1986, when St. Eric of Drexler proposed theoretical nanobots which might some day read the configuration of a frozen brain, allowing it to be recreated in a non-frozen medium. With that theoretical basis, the whole notion gained a following, even if still small and fringy. The closest I can come to a theoretical explanation for precognition would be hordes of nanoprocessors (midichlorians?) which live within the body of the human, which communicate among themselves and could theoretically pass information around. Michael gets in an accident, his nanobots contact the nanobots in his sister's body, by physically understandable means. That they would do so if they exist should not be so very extraordinary, for things in the meter-scale world happen very slowly from their point of view. Their being involved in a tire screech and a bone-crushing impact would be analogous to humans watching an infestation of pine beetles devouring a forest. Damien, your being a creative SF writer qualifies you to come up with something better than midichlorians. The point is that for the psi notion to advance any further, it needs a plausible, even if unlikely, explanation more than it needs more experimental data. Lacking that explanation, all weird experimental outcomes will always be dismissed.
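For grins, the arithmetic behind that p=0.05 is easy to check, and so is the selection filter. A minimal sketch in Python (standard library only; the 50-group figure is a pure assumption, just to show the shape of the worry):

import random
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The quoted result: 6 hits out of 12 trials at 25% chance.
print(binom_tail(6, 12, 0.25))  # ~0.0544, right at the reported p=0.05 level

# The filter: simulate many chance-only sister acts, then air only the best.
random.seed(1)
best = max(sum(random.random() < 0.25 for _ in range(12)) for _ in range(50))
print(best)  # the single aired group will often show 6+ hits with no psi at all

Run it a few times with different seeds: the "best of 50" group clears the significance bar far more often than not, which is exactly the worry about only hearing of the experiments that go beyond 5% weird.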
spike From thespike at satx.rr.com Fri Jan 8 04:38:55 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 22:38:55 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <0AB97B96F36746E79B5AE261B3960142@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> Message-ID: <4B46B6DF.50504@satx.rr.com> On 1/7/2010 10:14 PM, spike wrote: > I had an idea which may have contributed to the remarkable outcome of the > Nolan sisters experiment. The sisters would know each others' sleep > patterns: who stayed up late, who was the early riser. If the call came in > early or late, it may reduce the likely pool by a sister or two in some > cases. The experimenters could have been unaware of this themselves, so > there is no need for accusations. > > Another possibility is that family chatter would tip off the sisters if one > was temporarily absent from the game: off to her father-in-law's hospital > bed for instance, reducing by one the pool of possibilities. The remaining > sisters may not think to suspend the game until rejoined by the fifth > singing beauty. Just to pop out this bit for comment: did you take a moment to read the linked paper, Spike? What you suggest here off the top of your head has absolutely nothing in common with the experiment as described. I can't easily imagine this being acceptable on ExIchat if someone was trying to laugh away/explain away results of professional stem cell work or solar power generation in space or other topics frequently mocked by those who can't be troubled to find out what's actually being done and proposed. Damien Broderick From msd001 at gmail.com Fri Jan 8 04:44:29 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 7 Jan 2010 23:44:29 -0500 Subject: [ExI] golden ratio discovered in quantum world In-Reply-To: <24B110CD0BA14840BA294A133E8051D9@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <24B110CD0BA14840BA294A133E8051D9@spike> Message-ID: <62c14241001072044i1ff0643ay875492563abf3e16@mail.gmail.com> On Thu, Jan 7, 2010 at 8:16 PM, spike wrote: > > Since we are discussing weirdness, check this: > > http://www.physorg.com/news182095224.html quantum fractal and phi? Interesting that a fractal with Hausdorff dimension of phi is connected but non-overlapping. So what are the physical properties of a system "tuned to quantum critical"? Hopefully it proves to create something special like room-temperature superconductivity or an even more fantastic property. There didn't seem to be easy-to-access links to more information. :( I did read a lot about E8, but it was fantastically over my head.
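If I'm reading the article right, the golden ratio there is not in the fractal part: it shows up as the ratio of the two lowest excitation masses predicted by the E8 symmetry at the Ising critical point, m2/m1 = 2cos(pi/5), which is exactly phi. A one-line check in Python:

from math import cos, pi, sqrt

phi = (1 + sqrt(5)) / 2      # golden ratio, ~1.6180339887
m_ratio = 2 * cos(pi / 5)    # E8 prediction for mass2/mass1
print(phi, m_ratio)          # equal to machine precision
assert abs(phi - m_ratio) < 1e-12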
From spike66 at att.net Fri Jan 8 05:08:14 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 21:08:14 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46B6DF.50504@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> Message-ID: <8BF0812307DF4DB5A2C71BA55A635B80@spike> > ...On Behalf Of Damien Broderick ... > On 1/7/2010 10:14 PM, spike wrote: > > > I had an idea which may have contributed to the remarkable > outcome of > > the Nolan sisters experiment. The sisters would know each others' > > sleep patterns... > > Just to pop out this bit for comment: did you take a moment > to read the linked paper, Spike? What you suggest here off > the top of your head has absolutely nothing in common with > the experiment as described... Damien Broderick I confess I did not read the details of the experiment, and I accept my scolding. That being said I reiterate my contention that what is missing is some kind of theoretical, even if wildly implausible, explanation. To that end, I propose the following: let us imagine explanations for psi or any supernatural phenom using physics that we can theoretically understand. I proposed midichlorians before, and these are at least theoretically possible. I don't see any reason why it is physically impossible for nanoprocessors to exist, a few trillion atoms, still small enough to be extremely difficult to identify, which ride in or on humans or beasts, watching and listening, learning and communicating among themselves and with their hosts to some extent. I will attempt another one, not original with me of course, the idea having been around for some time: we are already living in a post singularity world, we already exist as software, we are avatars. Weak low-gain feedback loops do exist, intentionally placed in our software, to cause a few observations that defy our explanation or understanding, in order to keep us wondering and searching. Examples would be psi, quantum mechanics, the baffling double slit experiment, the constancy of the speed of light, the nature of love, the mystery of life itself. With this game, I am not asking anyone to actually believe that there are midichlorians or that we are software with intentionally programmed strange-loops, but rather I am asking you to dig deep within your creative minds to propose some kind of wildly implausible but theoretically possible explanations for supernatural phenomena. 
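As a sanity check on "a few trillion atoms" (back-of-envelope only; the 0.25 nm spacing is an assumed typical solid-state figure):

atoms = 3e12                 # "a few trillion"
spacing_m = 0.25e-9          # assumed ~0.25 nm between atoms in a solid
side_m = atoms ** (1 / 3) * spacing_m
print(side_m * 1e6)          # ~3.6 micrometers on a side

So the hypothetical midichlorians come out around the size of a bacterium: big enough to hold a lot of machinery, small enough that nobody would have noticed them without looking very hard.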
spike From steinberg.will at gmail.com Fri Jan 8 05:27:41 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 8 Jan 2010 00:27:41 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <8BF0812307DF4DB5A2C71BA55A635B80@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> Message-ID: <4e3a29501001072127t553972aen962506f9435ef204@mail.gmail.com> > > On Fri, Jan 8, 2010 at 12:08 AM, spike wrote: > > With this game, I am not asking anyone to actually believe that there are > midichlorians or that we are software with intentionally programmed > strange-loops, but rather I am asking you to dig deep within your creative > minds to propose some kind of wildly implausible but theoretically possible > explanations for supernatural phenomena. > > spike > ok, I just thought of something sort of neat. The sisters didn't have enough trials to be significant, but this might explain other things. A lot of information in the brain is constantly changing. Maybe changing information is modeled mathematically by universal neurological constructs in all humans, to sort away information or something. The info suddenly pops up once in a while in your head, we all have times when we remember something suddenly; it is possible that this remembering is based on a specific mathematical form as well. So: Some people are very close--family, friends, lovers. Many of their memories are identical but for the viewpoint. These memories, as they are stored (objectively?) in their brains (with similar structures because of genetics and upbringing, as well as environment and shared experience by non-kin), begin the process of slow change. These people interact for a long time and thus accrue many of these memorial constructs, gradually morphing in the brain. In the future, one of them has been, consciously or subconsciously, recalling ideas and memories somehow tied to the relationship between himself and the other. That other, with the same thing happening in her brain, recalls the inverse relationship. So when A thinks about B and makes the call, B is already subconsciously anticipating it. Is it that these sorts of things can happen once in a while? People who are strongly connected are imbued with many of those strong memories, and so, in a rare occurrence, I might know my dad is calling because we both had been thinking about that time we played baseball and it made him want to call me. It might work even better in the short term--my girlfriend and I see a sign on Monday that mentally resurfaces on Wednesday and prompts a call. Prediction is based on physical interpretation of objective information. This is a possible physical method of interpreting objective information, so is this a good enough start or is it too woefully wacky again?
From emlynoregan at gmail.com Fri Jan 8 05:27:56 2010 From: emlynoregan at gmail.com (Emlyn) Date: Fri, 8 Jan 2010 15:57:56 +1030 Subject: [ExI] Alka-Seltzer added to spherical water drop in microgravity Message-ID: <710b78fc1001072127r54563878pf81ae32ee517176a@mail.gmail.com> Alka-Seltzer added to spherical water drop in microgravity http://www.youtube.com/watch?v=bgC-ocnTTto&feature=player_embedded How cool is that? -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From thespike at satx.rr.com Fri Jan 8 05:29:31 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 23:29:31 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <8BF0812307DF4DB5A2C71BA55A635B80@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> Message-ID: <4B46C2BB.5000003@satx.rr.com> On 1/7/2010 11:08 PM, spike wrote: > I am not asking anyone to actually believe that there are > midichlorians or that we are software with intentionally programmed > strange-loops, but rather I am asking you to dig deep within your creative > minds to propose some kind of wildly implausible but theoretically possible > explanations for supernatural phenomena. How did "supernatural" get into the discussion? Ah yes, recall my earlier speculation about anaphylactic shock triggered by suspicions of religion? There's no shortage of weird ideas to explain the weird phenomena labeled "psi"--my OUTSIDE THE GATE book canvasses quite a few--but they do tend to be built out of handwavium at the moment, just as continental drift was before plate tectonics. Observed nonlocality in time (information exchanges outside the light cone) is a challenge to any routine explanation except maybe the simulation narrative. But there are physicists with ideas on that, such as Richard Shoup of the Boundary Institute.** My bottom line *isn't* the absence of a theory; it's whether there's any solid evidence for the weirdness. And there is. **see for example (Nobody will, I expect.)
Damien Broderick From spike66 at att.net Fri Jan 8 05:47:40 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 21:47:40 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46C2BB.5000003@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com><8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> Message-ID: <3A63D7F97C254A96BEF12AE280A0DF06@spike> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Damien Broderick > Sent: Thursday, January 07, 2010 9:30 PM > To: ExI chat list > Subject: Re: [ExI] Psi (no need to read this post you already > knowwhatitsays ) > > On 1/7/2010 11:08 PM, spike wrote: > > I am not asking anyone to actually believe that there are > > midichlorians or that we are software... > > How did "supernatural" get into the discussion? Ah yes, > recall my earlier speculation about anaphylactic shock > triggered by suspicions of religion? I use the term in the general sense, not about god or angels, but rather anything outside our currently understood framework. Supernatural would include advanced spacefaring species for instance, which have a perfectly natural explanation: they evolved, they became technologically advanced, they went looking around. The notion that we are software does require a programmer beyond us currently however. spike From msd001 at gmail.com Fri Jan 8 06:03:59 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 8 Jan 2010 01:03:59 -0500 Subject: [ExI] Alka-Seltzer added to spherical water drop in microgravity In-Reply-To: <710b78fc1001072127r54563878pf81ae32ee517176a@mail.gmail.com> References: <710b78fc1001072127r54563878pf81ae32ee517176a@mail.gmail.com> Message-ID: <62c14241001072203y4ce66d8fj5c651e088db3f605@mail.gmail.com> On Fri, Jan 8, 2010 at 12:27 AM, Emlyn wrote: > Alka-Seltzer added to spherical water drop in microgravity > > http://www.youtube.com/watch?v=bgC-ocnTTto&feature=player_embedded > > How cool is that? Very. 
:) If you don't have a microgravity environment available to reproduce this at home, try one of the non-Newtonian fluid oscillators (cornstarch on a speaker cone) http://www.youtube.com/results?search_query=non-newtonian+fluid From thespike at satx.rr.com Fri Jan 8 06:04:06 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 00:04:06 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <3A63D7F97C254A96BEF12AE280A0DF06@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com><8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <3A63D7F97C254A96BEF12AE280A0DF06@spike> Message-ID: <4B46CAD6.4090608@satx.rr.com> On 1/7/2010 11:47 PM, spike wrote: > I use the term in the general sense, not about god or angels, but rather > anything outside our currently understood framework. So quantum gravity is supernatural now, and conventional QT was supernatural in 1890? This is a very unusual version of the idiom. Why not just use "paranormal"? It's a customary usage and doesn't imply any particular metaphysical stance, IMO, as "supernatural" does in the vernacular. At some stage, when the phenom are understood (or adequately debunked), they will indeed become "normal" but that shift in perspective strikes me as less paradoxical. (Does the paradoxical become doxical when understood? Could genetic engineering create a parrot ox? and so on.) Damien Broderick From max at maxmore.com Fri Jan 8 06:07:13 2010 From: max at maxmore.com (Max More) Date: Fri, 08 Jan 2010 00:07:13 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) Message-ID: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> > > >Spike's expected response: well, hmmm, there's got to be a technical >reason for this, but I've got to feed the kid now so I'll think about it >next week if I can remember. My good pal John Clark's response: That >Sheldrake idiot is a fool and made it all up and anyway they cheated and >it's all BULLSHIT, I'm not wasting any time reading this crap. My >response: Hmm, potentially interesting but the numbers are too small to >be anything but extremely provisional, let's see some more replications >by other people. Yes, the numbers are too small. Also: How many negative trials were *not* reported? In psi experiments, we rarely hear about "the silent evidence" as Taleb calls it. Max From jonkc at bellsouth.net Fri Jan 8 05:48:03 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 8 Jan 2010 00:48:03 -0500 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: On Jan 7, 2010, Aware wrote: >> Because we learned from the history of Evolution that consciousness is easy >> but intelligence is hard. > > Well, that response clearly adds nothing to the discussion Which word didn't you understand? > and you stripped out my supporting text.
I quote just enough material for you to know which part I'm responding to, feel free to strip my text in return, in fact I wish you would. The respond button is a diabolical invention, if people had to laboriously type in all quoted material I'll bet people would get to the point mighty damn fast. >>> before evolutionary processes stumbled upon the additional, supervisory, hack of self-awareness >> >> What you just said is logically absurd. > > Really? Yes really. > I note that you're not asking for any clarification. None was needed, you were perfectly clear, just illogical. >> If consciousness doesn't effect intelligence > > Do you mean literally "If consciousness doesn't produce intelligence" > or do you mean "If consciousness doesn't affect intelligence"? Put it this way, if intelligence didn't automatically produce consciousness then we wouldn't have it because Evolution couldn't even see it much less develop it. > appears that you must harbor a mystical notion The guy is known for disliking mysticism so let's call him a mystic. Boy I never heard that one before! > of "consciousness", that contributes to the somewhat > "intelligent" behavior of the amoeba I make no claim that an amoeba is intelligent or even "intelligent". Others have said that but not me; but I did say that if you accept that hypothetical then it is most certainly conscious. > despite its apparent lack of the neuronal apparatus necessary to support a sense of self. Unless you have just made the scientific discovery of the ages nobody knows what sort of neuronal apparatus are necessary for consciousness. >> there is no way Evolution could have "stumbled upon" the trick of generating consciousness > > It may be relevant that the way evolution (It's not clear why you would capitalize that word) If people can capitalize God I can capitalize Evolution, Scientific Method too. > works is always in terms of blind, stumbling, random variation. That is true but if Evolution stumbles onto something that doesn't help its genes get into the next generation then it has discovered nothing and just keeps on stumbling. > the extent the organism's fitness would be enhanced by the ability to model possible variations on itself Fine, if you're right then the ability of an organism to model itself would change its behavior in such a way that it is more likely to survive than if it lacked this ability; and observing behavior is what the Turing Test is all about. >> In short if even one conscious being exists on Planet Earth >> and if Evolution is true then the Turing Test works; > > Huh? If there were only one conscious being, then wouldn't that have > to be the one judging the Turing Test? Yes. > And if there is no other conscious being, how could any (non-conscious by definition) subject > pass the test So what? They wouldn't pass the test, nor should they if the test is valid. > such that the TT would be shown to "work"? The Turing Test will never be proven to work, few things outside of pure mathematics can be, but if you assume that Evolution is true and knowing from direct experience that at least one conscious being exists then you can deduce that the Turing Test must work. > It seems to me that we observe the existence of both classes of > evolved organisms. I belong to the class of conscious evolved organisms and I believe you belong to the same class because you pass the Turing Test. Of course you have no reason to think I'm conscious because you don't believe in the Turing Test, but never mind.
You know from direct experience that you are conscious so how did you come to be? If the same behavior can be produced without consciousness (and that's why the Turing Test doesn't work) then I repeat my question, how did you come to be? Evolution doesn't find or retain traits that don't help an organism survive. If Evolution can see something then so can the Turing Test, and consciousness is something. > > I'm guessing that our disagreement here comes down to different usage > and meaning of the terms "intelligence" and "consciousness" I don't think so. > and it might be significant that you stripped out all evidence and results of > my efforts to effectively define them. You seem not to play fair, so it's not much fun. Oh for God's sake! If somebody wants to read your original post again they certainly have the means of doing so. John K Clark From thespike at satx.rr.com Fri Jan 8 06:33:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 00:33:57 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> References: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> Message-ID: <4B46D1D5.7010001@satx.rr.com> On 1/8/2010 12:07 AM, Max More wrote: > How many negative trials were *not* reported? By "negative trials" I assume you mean something like "runs of trials with outcomes that were not significantly different from mean chance expectation." By "not reported" I assume you mean "deceptively hidden or discarded." My estimate in this case: None of them. Nobody has ever questioned Dr. Sheldrake's probity (although some of his theories are pretty hard to take seriously). Well, Randi did, once, until he was shown to have lied. > In psi experiments, we > rarely hear about "the silent evidence" In analyses of psi experiments by anomalies researchers, actually we hear all the time about the likelihood and magnitude of what is termed "the file drawer." That's where non-significant results are supposed by critics to be hidden away. The reality is that the file drawer can't *possibly* hide sufficient dud data to account for the observations. I take it you have reason to doubt this; what is your evidence? Here's Dean Radin's THE CONSCIOUS UNIVERSE (not a bad summary) on the file-drawer effect (selective reporting) in one major protocol, and this is now standard: (page 79-80) "Another factor that might account for the overall success of the ganzfeld studies was the editorial policy of professional journals, which tends to favor the publication of successful rather than unsuccessful studies. This is the 'file-drawer' effect mentioned earlier. Parapsychologists were among the first to become sensitive to this problem, which affects all experimental domains. In 1975 the Parapsychological Association's officers adopted a policy opposing the selective reporting of positive outcomes. As a result, both positive and negative findings have been reported at the Parapsychological Association's annual meetings and in its affiliated publications for over two decades. Furthermore, a 1980 survey of parapsychologists by the skeptical British psychologist Susan Blackmore had confirmed that the file-drawer problem was not a serious issue for the ganzfeld meta-analysis. Blackmore uncovered nineteen complete but unpublished ganzfeld studies.
Of those nineteen, seven were independently successful with odds against chance of twenty to one or greater. Thus while some ganzfeld studies had not been published, Hyman and Honorton agreed that selective reporting was not an important issue in this database. Still, because it is impossible to know how many other studies might have been in file drawers, it is common in meta-analyses to calculate how many unreported studies would be required to nullify the observed effects among the known studies. For the twenty-eight direct-hit ganzfeld studies, this figure was 423 file-drawer experiments, a ratio of unreported-to-reported studies of approximately fifteen to one. Given the time and resources it takes to conduct a single ganzfeld session, let alone 423 hypothetical unreported experiments, it is not surprising that Hyman agreed with Honorton that the file-drawer issue could not plausibly account for the overall results of the psi ganzfeld database. There were simply not enough experimenters around to have conducted those 423 studies. Thus far, the proponent and the skeptic had agreed that the results could not be attributed to chance or to selective reporting practices." Damien Broderick From max at maxmore.com Fri Jan 8 08:40:31 2010 From: max at maxmore.com (Max More) Date: Fri, 08 Jan 2010 02:40:31 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) Message-ID: <201001080840.o088ec6A009055@andromeda.ziaspace.com> Damien: I know how frustrating it must be for you to discuss the psi topic on this list. It must be how I often feel when I suggest that maybe climate models are perhaps not highly reliable guides to the present and future... I would be delighted if psi phenomena turned out to be real. For one thing, it would annoy John Clark, who is holding his reflexive rationalism a little too tightly and not allowing it to breathe. (John, quite often I enjoy your sharp, brutal expressions of sanity, but on this topic I think you're being overly curt and quick.) For another thing, it would shake up physics and expand our horizons and potentially open new avenues to enhancement. So, I have nothing intrinsically against it. The sources of my strong resistance to accepting claims of real psi phenomena are mainly (a) that it seems to conflict with our best-established knowledge [which, of course, is not an ultimate reason for dismissal], and (b) my past experience with this topic, both in my personal experience and my (now long past, it's true) extensive reading on the topic. My attraction to the idea of paranormal powers would be obvious to anyone who knew me in the mid to late 1970s. I spent quite a bit of time trying to develop psychic abilities when I was about 11 to 14. I read many books, practiced quite a few exercises and magic rituals, tried out several groups (including dowsers, Kabbalists, Transcendental Meditators (who were then promoting the ability to develop "sidhis" or special powers like invisibility, levitation, and walking through walls), and the Rosicrucians). At the time, I lacked the intellectual tool kit for structured critical thinking, yet I soon saw reasons for doubting each and every claim. Others insisted that I was good at dowsing underground water paths, even though it was obvious that no evidence existed to support the claim.
For a while (when I was 12, possibly 13), I convinced myself that I could make a weight-on-a-string swing by the power of my mind, but I eventually realized it was my unconscious and very slight movements -- as shown by my inability to cause swinging if the top of the string wasn't connected to my finger... etc. That is, my own experience in both practice and reading revealed the sheer amount of crap out there that went under the psi banner. (A search for "psychic" under Books at Amazon shows that all this crap is still there.) As for specific critiques, I don't remember many to cite at the moment. One that I do recall is John Sladek's book, The New Apocrypha. >By "negative trials" I assume you mean something like "runs of trials >with outcomes that were not significantly different from mean chance >expectation." By "not reported" I assume you mean "deceptively hidden or >discarded." Actually, no, that's not (only or mostly) what I mean -- although that is certainly possible and seems to have happened repeatedly in the past. There's a general publication bias against negative results. It's a problem in numerous fields of study. People getting negative results are less likely to write them up carefully and submit them. Publications are less likely to publish them. Still, thanks for your comments and pointers on this issue. It's good to see some attention to the problem of silent evidence. I don't buy what I just read on that without more follow-up, but it's an encouraging sign. It may be that my resistance to claims of psi phenomena is just sour grapes, since in my own life I've never observed the slightest hint of psychic events or abilities. However, past experience makes me extremely reluctant to devote significant time to looking at new evidence (esp. when so much previous new evidence ended up looking bad). That doesn't mean I am certain psi phenomena are all false. I would like to read your book on the topic, Damien. But, given my past experience and the apparently minor nature of claimed results, it's just not likely that it's going to be a top priority. I know that's annoying and frustrating, but I hope you can understand why I see it that way (and, I suspect, quite a few other people on this list). If it turns out that psychic phenomena really don't exist, it will be disappointing, but perhaps technology can allow us to convincingly fake it or simulate it (no, this isn't an invitation to mention Chinese Rooms). I hope this post is reasonably coherent. Natasha has already got out of bed and gently told me off for staying up so late. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From max at maxmore.com Fri Jan 8 08:43:14 2010 From: max at maxmore.com (Max More) Date: Fri, 08 Jan 2010 02:43:14 -0600 Subject: [ExI] I'm no fool Message-ID: <201001080843.o088hLc2019286@andromeda.ziaspace.com> With the current discussion about psi, and our continuing interest in rational thinking...
Recently, I heard a line in a South Park episode that I found extremely funny and really quite deep, paradoxical, and illuminating: "I wasn't born again yesterday" (This was in South Park, season 7, "Christian Rock Hard") Max From cetico.iconoclasta at gmail.com Fri Jan 8 10:27:46 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Fri, 8 Jan 2010 08:27:46 -0200 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> Message-ID: <02e601ca904d$39c73ac0$fd00a8c0@cpdhemm> >>No, not really, it is more passive than that. I don't really have any >>control over it happening. >> I had an image of your leg, below your knee, come to mind, and I also had >> a tree come to mind, >> but I wouldn't know what the heck that means. You've got it partially right (despite being very very vague). It was the tibia and the knee in a motorcycle accident, but there was no tree involved whatsoever, only two puppy dogs that decided to chase each other in a busy street. This little completely unscientific experiment doesn't prove (or disprove) anything. It was just for the sake of curiosity. From cetico.iconoclasta at gmail.com Fri Jan 8 10:35:46 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Fri, 8 Jan 2010 08:35:46 -0200 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM><79471C131D7F4EE28EE05A725ED29AED@spike><4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com><02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm><398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <02f801ca904e$5853dba0$fd00a8c0@cpdhemm> spike> Perhaps your young ears were able to detect the ultra high frequency sound > that the 70s era telephone electromechanical devices would make a couple > of > seconds before the phone rang. Recall those things had a capacitor in > them, This can happen. I myself can hear the high freq noise of a tube TV being turned on anywhere in the house. It's very very annoying to the point that I'm really glad CRT TVs are almost all gone. From pharos at gmail.com Fri Jan 8 11:42:40 2010 From: pharos at gmail.com (BillK) Date: Fri, 8 Jan 2010 11:42:40 +0000 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <4B46D1D5.7010001@satx.rr.com> References: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> <4B46D1D5.7010001@satx.rr.com> Message-ID:
Other scientists have to able to replicate the experiments. Short tests that involve guessing one of four numbers (or one of four people phoning in), or one of five shapes, are very susceptible to producing runs of 'above or below average' results. That's why when very long runs are done the results do approach the expected average (or the psychics get so bored that their powers fade out). Odd things happen all the time. One man has been struck by lightning ten times, somebody has to win the lottery (sometimes more than once), some gamblers get lucky streaks and other gamblers get losing streaks, and so on. These things happen in a random universe. Random doesn't mean always average. But, anyway, what's the point? If the psi effects are pretty much unpredictable / random then they cannot be used for anything. I want psi powers that are usable and practical. If I could think hard to get friends to phone me, it would save me a fortune in phone bills. Similarly, they ought to know not to phone me when I'm in the shower or in the middle of dismantling a motorcycle engine in the middle of the living-room. BillK From stefano.vaj at gmail.com Fri Jan 8 11:46:26 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 12:46:26 +0100 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46A73E.2090200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> Message-ID: <580930c21001080346x407eb5b7vac63920b890b3b1b@mail.gmail.com> 2010/1/8 Damien Broderick : > If you're assuming a lack of probity in the experimenter, why not just say > the whole thing was scripted or made up and have done with it? This is the > bottom line with most skeptical retorts. What would it take to dispose of > this canard? Sworn statements from everyone involved? No, people like that, > they'd lie for profit, right? Or just because they're deluded loons. Or > maybe I just invented it to sell my books. Of course, the general idea behind science is that if you do not believe something which is being reported, you do not rely on ad personam arguments, you go and see for yourself. Now, even though I have never given it a try, I am inclined to believe that anybody trying to guess cards beyond a wall ends up finding, if he is dedicated enough, a marginal discrepancy between the actual results and the expected statistical distribution which becomes more and more unlikely as the number of trials grows (an interesting experiment that I am not aware has ever been tried, and might have some weight with respect to our little discussion on AGI, is how a PC would perform in similar circumstances: worse, better, just the same?). Some much more dramatic anedoctes reported in your book are really non-repeatable anyway, so I think that you may believe or not that something strange is happening as you please. In any event, strange facts are bound to happen after all, irrespective of the fact that organic brains are involved at all... 
-- Stefano Vaj From gts_2000 at yahoo.com Fri Jan 8 13:44:39 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 05:44:39 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <640685.78939.qm@web36505.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: >> Yes and now you see why I claim Cram's surgeon must go > in repeatedly to patch the software until his patient passes > the Turing test: because the patient has no experience, the > surgeon must keep working to meet your logical requirements. > The surgeon finally gets it right with Service Pack 9076. > Too bad his patient can't know it. > > The surgeon will be rightly annoyed if the tweaking and > patching has not been done at the factory so that the p-neurons just > work. My point here concerns the fact that because experience affects behavior including neuronal behavior, and because the patient presents with symptoms indicating no experience of understanding language, and because on my account p-neurons != c-neurons, the p-neurons cannot work as advertised "out of the box". The initial operation fails miserably. The surgeon must then keep reprogramming and replacing more natural neurons throughout the patient's brain. He succeeds eventually in creating intelligent and coherent behavior in his patient, but it costs the patient most or all his intentionality. -gts From stefano.vaj at gmail.com Fri Jan 8 13:46:33 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 14:46:33 +0100 Subject: [ExI] atheism In-Reply-To: <504420.54703.qm@web81603.mail.mud.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> Message-ID: <580930c21001080546y76cc3988md690611d514c7e9f@mail.gmail.com> 2010/1/7 Kevin Freels : > From: Stefano Vaj > To: ExI chat list > Sent: Wed, January 6, 2010 2:39:10 PM > Subject: Re: [ExI] atheism >> What I mean there is that while it is perfectly normal in everyday >> life to believe things without any material evidence thereof (the >> existence of cats and sleep does not tell me anything about the >> current state of my cat any more than the existence of number 27 on >> the roulette provides any ground for my belief that this is >> the number which is going to win, and therefore on which I should bet, >> at the next throw of the ball), what is abnormal is to claim that such >> assumptions are a philosophical necessity or of ethical relevance. > > It is quite different to say "I am convinced there is no God" than it is to > say "I am not convinced there is a God" > There is no evidence disproving the existence of God so to believe there is > no god is indeed a faith in itself. Why, not any more than acquitting a man from a murder charge because there is no evidence that he is implicated is equal to convicting him... :-) We cannot avoid forming beliefs about things that are not proved (e.g., whether it will rain tomorrow or not), but this has nothing to do with "faith" in a monotheistic sense.
Message-ID: <4eaaa0d91001080558s3f06d8dfq88a222ac42b0a30@mail.gmail.com> The Friendly AI Problem: how can we ensure that superintelligent AI doesn't terminate us? Venue: Room 416, Birkbeck College. Date: Saturday 23rd December. Time: 2pm-4pm. About the talk: Suppose that humans succeed in understanding just what it is about the human brain that makes us smart, and manage to port that over to silicon based digital computers. Suppose we succeed in creating a machine that was smarter than us. What would it do? Would we benefit from it? This talk will present arguments that show that there are many different ways that the creation of human-level AI could spell disaster for the human race. It will also cover how we might stave off that disaster - how we might create a superintelligence that is benevolent to the human race. About the speaker: Roko Mijic graduated from the University of Cambridge with a BA in Mathematics, and the Certificate of Advanced Study in Mathematics. He spent a year doing research into the foundations of knowledge representation at the University of Edinburgh and holds an MSc in informatics. He is currently an advisor for the Singularity Institute for Artificial Intelligence. Roko writes the blog "Transhuman goodness" For more details about Roko, see RokoMijic.com ** There's no charge to attend this meeting, and everyone is welcome. There will be plenty of opportunity to ask questions and to make comments. **Discussion will continue after the event, in a nearby pub, for those who are able to stay. ** Why not join some of the Extrobritannia regulars for a drink and/or light lunch beforehand, any time after 12.30pm, in The Marlborough Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a table where there's a copy of the book "Beyond AI: creating the conscience of the machine" displayed. ** Venue: Room 416 is on the fourth floor (via the lift near reception) in the main Birkbeck College building, in Torrington Square (which is a pedestrian-only square). Torrington Square is about 10 minutes walk from either Russell Square or Goodge St tube stations www.extrobritannia.blogspot.com UK Transhumanist Association: www.transhumanist.org.uk From estropico at gmail.com Fri Jan 8 14:06:55 2010 From: estropico at gmail.com (estropico) Date: Fri, 8 Jan 2010 14:06:55 +0000 Subject: [ExI] ExtroBritannia: The Friendly AI Problem: how can we ensure that superintelligent AI doesn't terminate us? In-Reply-To: <4eaaa0d91001080558s3f06d8dfq88a222ac42b0a30@mail.gmail.com> References: <4eaaa0d91001080558s3f06d8dfq88a222ac42b0a30@mail.gmail.com> Message-ID: <4eaaa0d91001080606q31d35240w3cabb7d08f4c25d0@mail.gmail.com> Ooops! Obviously I meant the 23rd of **January**! Cheers, Fabio On Fri, Jan 8, 2010 at 1:58 PM, estropico wrote: > The Friendly AI Problem: how can we ensure that superintelligent AI > doesn't terminate us? > > Venue: Room 416, Birkbeck College. > Date: Saturday 23rd December. > Time: 2pm-4pm. > > About the talk: > > Suppose that humans succeed in understanding just what it is about the > human brain that makes us smart, and manage to port that over to > silicon based digital computers. Suppose we succeed in creating a > machine that was smarter than us. > > What would it do? Would we benefit from it? > > This talk will present arguments that show that there are many > different ways that the creation of human-level AI could spell > disaster for the human race. 
It will also cover how we might stave off > that disaster - how we might create a superintelligence that is > benevolent to the human race. > > About the speaker: > > Roko Mijic graduated from the University of Cambridge with a BA in > Mathematics, and the Certificate of Advanced Study in Mathematics. He > spent a year doing research into the foundations of knowledge > representation at the University of Edinburgh and holds an MSc in > informatics. He is currently an advisor for the Singularity Institute > for Artificial Intelligence. > > Roko writes the blog "Transhuman goodness" > > For more details about Roko, see RokoMijic.com > > ** There's no charge to attend this meeting, and everyone is welcome. > There will be plenty of opportunity to ask questions and to make > comments. > > **Discussion will continue after the event, in a nearby pub, for those > who are able to stay. > > ** Why not join some of the Extrobritannia regulars for a drink and/or > light lunch beforehand, any time after 12.30pm, in The Marlborough > Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a > table where there's a copy of the book "Beyond AI: creating the > conscience of the machine" displayed. > > ** Venue: > > Room 416 is on the fourth floor (via the lift near reception) in the > main Birkbeck College building, in Torrington Square (which is a > pedestrian-only square). Torrington Square is about 10 minutes walk > from either Russell Square or Goodge St tube stations > > www.extrobritannia.blogspot.com > UK Transhumanist Association: www.transhumanist.org.uk > From gts_2000 at yahoo.com Fri Jan 8 14:00:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 06:00:40 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: Message-ID: <657810.82216.qm@web36506.mail.mud.yahoo.com> Somebody said... > Searle and Gordon aren't saying that machine consciousness isn't > possible. ?If you pay attention you'll see that once in a while > they'll come right out and say this, at which point you think they've > expressed an inconsistency. ?They're saying that even though it's > obvious that some machines (e.g. humans) do have consciousness, it's > also clear that no formal system implements semantics. And they're > correct. Right. For many years I despised Searle, considering him some sort of anti-tech philosophical Luddite. Then I took the time to really study him. I learned that I had based my opinion on a misunderstanding. Even if he's wrong, you won't find many people who better understand the challenge of strong AI. -gts From stathisp at gmail.com Fri Jan 8 15:14:25 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 02:14:25 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <640685.78939.qm@web36505.mail.mud.yahoo.com> References: <640685.78939.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/9 Gordon Swobe : > --- On Thu, 1/7/10, Stathis Papaioannou wrote: > >>> Yes and now you see why I claim Cram's surgeon must go >> in repeatedly to patch the software until his patient passes >> the Turing test: because the patient has no experience, the >> surgeon must keep working to meet your logical requirements. >> The surgeon finally gets it right with Service Pack 9076. >> Too bad his patient can't know it. >> >> The surgeon will be rightly annoyed if the tweaking and >> patching has not been done at the factory so that the p-neurons just >> work. 
> > My point here concerns the fact that because experience affects behavior including neuronal behavior, and because the patient presents with symptoms indicating no experience of understanding language, and because on my account p-neurons != c-neurons, the p-neurons cannot work as advertised "out of the box". The initial operation fails miserably. The surgeon must then keep reprogramming and replacing more natural neurons throughout the patient's brain. He succeeds eventually in creating intelligent and coherent behavior in his patient, but it costs the patient most or all his intentionality. You say experience affects behaviour, but you are quite happy with the idea that a zombie can reproduce human behaviour without having experience. So what is to stop a p-neuron from behaving like a c-neuron despite lacking experience if nothing stops the zombie from acting like a human, which is arguably a much harder task? -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Jan 8 15:28:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 07:28:36 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: Message-ID: <684147.29925.qm@web36506.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: > The NCC is either gibberish or something trivially obvious, > like oxygen, since without it neurons wouldn't work and you > would lose consciousness. As I already mentioned, the presence of oxygen clearly plays a role in whatever physical conditions must exist in the brain for it to have subjective experience. The sum of all those physical conditions = the NCC. Neuroscientists will eventually understand the NCC in great detail. Whatever it turns out to be, we will no doubt someday have the ability to simulate it along with the rest of the brain on a computer just as we can simulate any other physical thing on a computer. And that computer simulation will *appear* conscious, not much different from the way simulations of ice cubes appear cold. -gts From gts_2000 at yahoo.com Fri Jan 8 15:40:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 07:40:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <32395.39984.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/8/10, Stathis Papaioannou wrote: > You say experience affects behaviour, but you are quite > happy with the idea that a zombie can reproduce human behaviour without > having experience. Yes. > So what is to stop a p-neuron from behaving like a > c-neuron despite lacking experience if nothing stops the > zombie from acting like a human, which is arguably a much harder task? Nothing. Just as philosophical zombies are logically possible, so too are p-neurons and for the same reasons. But again the first surgeon who tries a partial replacement in Wernicke's area with p-neurons will run into serious complications. The p-neurons will require lots of programming and patches and so on to compensate for the patient's lack of experience, complications the surgeon did not anticipate because like you he does not realize that the p-neurons don't give the patient the experience of his own understanding. 
-gts From stathisp at gmail.com Fri Jan 8 15:48:29 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 02:48:29 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/8 Aware : > But if you watch carefully, he accepts functionalism, IFF the > candidate machine/substrate actually reproduces the function of the > brain. But then he goes on to show that for any formal description of > any machine, there's no place IN THE MACHINE where understanding > actually occurs. He explicitly says that a machine could fully reproduce the function of a brain but fail to reproduce the consciousness of the brain. He believes that the consciousness resides in the actual substrate, not the function to which the substrate is put. If you want to extend "function" to include consciousness then he is a functionalist, but that is not a conventional use of the term. > He's right about that. He actually *does* think there is a place in the machine where understanding occurs, if the machine is a brain. > But here he goes wrong: He claims that human brains obviously do have > understanding, and suggests that he has therefore proved that there is > something different about attempts to produce the same in machines. > > But there's no understanding in the human brain, either, nor any > evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER. Right. > We don't have understanding in our brains, but we don't need it. > Never did. We have only actions, which appear (with good reason) to > be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE > ACTOR ITSELF. Searle would probably say there's no observer in a computer. > Sure it's non-intuitive. It's Zen. In the true, non-bastardized > sense of the word. And if you're gonna design an AI that displays > consciousness, then it would be helpful to understand this so you > don't spin your wheels trying to figure out how to implement it. You could take the brute force route and copy the brain. > > >>> So I suggest (again) to you and Gordon, and Searle, that you need to >>> broaden your context. That there is no essential consciousness in the >>> system, but in the recursive relation between the observer and the >>> observed. Even (or especially) when the observer and observed are >>> functions of the same brain, you get self-awareness entailing the >>> reported experience of consciousness, which is just as good because >>> it's all you ever really had. >> >> Isn't the relationship between the observer and observed a function of >> the observer-observed system? > > No. The system that is being observed has no place in it where > meaning/semantics/qualia/intentionality can be said to exist. If you > look closely all you will find is components in a chain of cause and > effect. Syntax but no semantics, as Gordon pointed out early on in > this discussion. But an observer, at whatever level of recursion, > will report meaning in its terms. > > It may help to consider this: > > If I ask you (or you ask yourself (Don't worry; it's recursive)) about > the redness of an apple that you are seeing, that "experience" never > occurs in real-time. It's always only a product of some processing > that necessarily takes some time.
Real-time experience never happens; > it's a logical and practical impossibility. So in any case, the > information corresponding to the redness of that apple, its luminance, > its saturation, its flaws, its associations with the remembered red of > a fire truck, and on and on, is in effect delivered or made available, > after some delay, to another system. And that system will do whatever > it is that it will do, determined by its nature within that context. > In the case of delivery to the system (observer) that is going to find > out about that red, then the observer system will then do something > with that information (again completely determined by its nature, with > that context.) The observer system might remark out loud about the > redness of the apple, and remember doing so. It may say nothing, and > only store the new perception (of perceiving) the redness. A moment > later it may use that perception (from memory) again, of course linked > with newly delivered information as well. If at any point the nature > of the observer (within context, which might be me asking you what you > experienced) focuses attention again on information about its internal > state, the process repeats, keeping the observer process pretty well > satisfied. From a third-person point of view, there was never any > meaning anywhere in the system, including within the observer we just > described. But if you ask the observer about the experience, of > course it will truthfully report in terms of first-person experience. > What more is there to say? Searle would say that experience must be an intrinsic property of the matter causing the experience. If not, then it would be possible to get it out of one system reacting to or observing another system as you describe, which would be deriving meaning from syntax, which he believes is a priori impossible. -- Stathis Papaioannou From ddraig at gmail.com Fri Jan 8 09:26:02 2010 From: ddraig at gmail.com (ddraig) Date: Fri, 8 Jan 2010 20:26:02 +1100 Subject: [ExI] Fwd: [ctrl] Nuclear Powered Nanorobots 2Replace Food?-Robert Freitas on How Nuclear-Powered Nanobots Will Allow Us 2Forgo Eating a Square Meal for a Century In-Reply-To: <391531.41185.qm@web53408.mail.re2.yahoo.com> References: <391531.41185.qm@web53408.mail.re2.yahoo.com> Message-ID: Hiyas I am quite astonished to find this on the Conspiracy Theory Research List and not on exichat, but, there you go. Any comments in the body of the text are not mine - Dwayne And here we are: ---------- Forwarded message ---------- Subject: [ctrl] Nuclear Powered Nanorobots 2Replace Food?- Robert Freitas on How Nuclear-Powered Nanobots Will Allow Us 2Forgo Eating a Square Meal for a Century "In the future, we may see a type of pill for replacing food, but experts say it likely would not be a simple compound of chemicals. A pill-sized food replacement system would have to be extremely complex because of the sheer difficulty of the task it was being asked to perform, more complex than any simple chemical reaction could be. The most viable solution, according to many futurists, would be a nanorobot food replacement system. Dr. Robert Freitas, author of the Nanomedicine series and senior research fellow at the Institute for Molecular Manufacturing, spoke with FUTURIST magazine senior editor Patrick Tucker about it." Read ...
WFS Update: Robert Freitas on How Nuclear-Powered Nanobots Will Allow Us to Forgo Eating a Square Meal for a Century Tuesday, Dec 29 2009 http://www.acceleratingfuture.com/michael/blog/2009/12/wfs-update-robert-freitas-on-how-nuclear-powered-nanobots-will-allow-us-to-forgo-eating-a-square-meal-for-a-century/ Wow, this surprised me. This is the sort of thing that I would write off as nonsense on first glance if it weren't from Robert Freitas, who is legendary for the rigor of his calculations [http://www.nanomedicine.com/]. Here's the bit, from a World Future Society update: The Issue: Hunger The number of people on the brink of starvation will likely reach 1.02 billion -- or one-sixth of the global population -- in 2009, according to the United Nations Food and Agriculture Organization (FAO). In the United States, 36.2 million adults and children struggled with hunger at some point during 2007. The Future: The earth's population is projected to increase by 2.5 billion people in the next four decades; most of these people will be born in the countries that are least able to grow food. Research indicates that these trends could be offset by improved global education among the world's developing populations. Population declines sharply in countries where almost all women can read and where GDP is high. As many as 2/3 of the earth's inhabitants will live in water-stressed areas by 2030, and decreasing water supplies will have a direct effect on hunger. Nearly 200 million Africans are facing serious water shortages. That number will climb to 230 million by 2025, according to the United Nations Environment Program. Finding fresh water in Africa is often a huge task, requiring people (mostly women and children) to trek miles to public wells. While the average human requires only about 4 liters of drinking water a day, as much as 5,000 liters of water is needed to produce a person's daily food requirements. Futurist Fixes 1. The Food Pill. In the future, we may see a type of pill for replacing food, but experts say it likely would not be a simple compound of chemicals. A pill-sized food replacement system would have to be extremely complex because of the sheer difficulty of the task it was being asked to perform, more complex than any simple chemical reaction could be. The most viable solution, according to many futurists, would be a nanorobot food replacement system. Dr. Robert Freitas, author of the Nanomedicine series and senior research fellow at the Institute for Molecular Manufacturing, spoke with FUTURIST magazine senior editor Patrick Tucker about it. In his books and various writings, Freitas has described several potential food replacement technologies that are somewhat pill-like. The key difference, however, is that instead of containing drug compounds, the capsules would contain thousands of microscopic robots called nanorobots. These would be in the range of a billionth of a meter in size so they could easily fit into a large capsule, though a capsule would not necessarily be the best way to administer them to the body. Also, while these microscopic entities would be called "robots," they would not necessarily be composed of metal or possess circuitry. They would be robotic in that they would be programmed to carry out complex and specific functions in three-dimensional space. One food replacement Dr. Freitas has described is nuclear-powered nanorobots. Here's how these would work: the only reason people eat is to replace the energy they expend walking around, breathing, living life, etc.
Like all creatures, we take energy stored in plant or animal matter. Freitas points out that the isotope gadolinium-148 could provide much of the fuel the body needs. But a person can't just eat a radioactive chemical and hope to be healthy; instead he or she would ingest the gadolinium in the form of nanorobots. The gadolinium-powered robots would make sure that the person's body was absorbing the energy safely and consistently. Freitas says the person might still have to take some vitamin or protein supplements, but because gadolinium has a half life of 75 years, the person might be able to go for a century or longer without a square meal. For people who really like eating but don't like what a food-indulgent lifestyle does to their body, Freitas has two other nanobot solutions. "Nutribots" floating through the bloodstream would allow people to eat virtually anything, a big fatty steak for instance, and experience very limited weight or cholesterol gain. The nutribots would take the fat, excess iron, and anything else that the eater in question did not want absorbed into his or her body and hold onto it. The body would pass the nutribots, and the excess fat, normally out of the body in the restroom. A nanobot Dr. Freitas calls a "lipovore" would act like a microscopic cosmetic surgeon, sucking fat cells out of your body and giving off heat, which the body could convert to energy to eat a bit less. Where can you read more about Robert Freitas's ideas? In the January-February 2010 issue of THE FUTURIST magazine, Freitas lays out his ideas for improving human health through nanotechnology. Yes, there are many other technologies that could help out better with hunger right now. The most important are the three initiatives singled out by Giving What We Can as being high-leverage intervention points: schistosomiasis control, stopping tuberculosis, and the regular delivery of micronutrient packages. Another is the iodization of salt. How can these stop hunger? Well, the diseases and ill health caused by the absence of these measures are so great that alleviating them will increase the total amount of time that people have available to engage in farming, which in the short term will alleviate hunger more effectively than any direct measure. Delivering food in the form of aid fosters dependence. Anyway, the summary of Freitas' food bot ideas above seems very limited. I'm sure that Freitas has worked out the design in greater detail. For instance, I take it that the nanobots he is talking about are powered through a radioisotope rather than a nuclear fission plant, but the text doesn't make that clear enough, in my opinion. I wonder -- how is it that gadolinium can be broken down into all the nutrients the body needs? Wouldn't a large amount be required, because fueling the chemical reactions of the body requires bulk and mass no matter how you slice it? I am seeing a lot of technical questions and holes in the idea, as it is brusquely presented above. I will email Freitas and ask him to point us to the proper writings.
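In the meantime, here is the sort of back-of-the-envelope check I have in mind -- a rough sketch only, and every number in it is my own assumption (roughly 3.3 MeV released per Gd-148 alpha decay, a resting human power budget of roughly 100 watts), not anything taken from Freitas:

import math

# Sketch: how much pure Gd-148 would it take to match a resting
# human's power budget? Every input here is an assumption, not a
# figure from Freitas.
HALF_LIFE_S = 75 * 365.25 * 24 * 3600   # 75-year half-life, as quoted above
E_DECAY_J = 3.3e6 * 1.602e-19           # assume ~3.3 MeV per decay, in joules
POWER_W = 100.0                         # ~2000 kcal/day is roughly 100 watts
AVOGADRO = 6.022e23
MOLAR_MASS_G = 148.0                    # grams per mole of Gd-148

lam = math.log(2) / HALF_LIFE_S         # decay constant, per second
decays_per_s = POWER_W / E_DECAY_J      # decays needed each second
grams = decays_per_s / lam / AVOGADRO * MOLAR_MASS_G
print("Gd-148 needed for %.0f W: about %.0f grams" % (POWER_W, grams))

That works out to something like 150-plus grams of the pure isotope just for steady-state power, before any conversion losses -- nothing like pill-sized, which is exactly the bulk-and-mass worry above.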
-- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From stathisp at gmail.com Fri Jan 8 16:06:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 03:06:47 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <417015.40057.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/9 Gordon Swobe : > --- On Fri, 1/8/10, Stathis Papaioannou wrote: > >> You say experience affects behaviour, but you are quite >> happy with the idea that a zombie can reproduce human behaviour without >> having experience. > > Yes. > >> So what is to stop a p-neuron from behaving like a >> c-neuron despite lacking experience if nothing stops the >> zombie from acting like a human, which is arguably a much harder task? > > Nothing. But again the first surgeon who tries a partial replacement in Wernicke's area with p-neurons will run into serious complications. The p-neurons will require lots of programming and patches and so on to compensate for the patient's lack of experience, complications the surgeon did not anticipate because like you he does not realize that the p-neurons don't give the patient the experience of his own understanding. I think I see what you mean now. The generic p-neurons can't have any information about language pre-programmed, so the patient will have to learn to speak again. However, the same problem will occur with the c-neurons. In both cases the patient will have the capacity to learn language, and after some effort both will appear to learn language, equally quickly and equally well since the p-neurons and c-neurons are functionally equivalent. However, Sam will truly understand what he is saying while Cram will behave as if he understands what he is saying and believe that he understands what he is saying, without actually understanding anything. Is that right? An alternative experiment involves more advanced techniques whereby everyone's brain is continually scanned throughout their life, so that if they suffer a brain injury the damaged part can be replaced with neurons in exactly the same configuration as the originals, so that the patient does not lose memories or abilities. In this case, Sam and Cram would both wake up and immediately declare that they have had the power of language restored. -- Stathis Papaioannou From stefano.vaj at gmail.com Fri Jan 8 16:07:02 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 17:07:02 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> 2010/1/8 Stathis Papaioannou : > He explicitly says that a machine could fully reproduce the function > of a brain but fail to reproduce the consciousness of the brain. I suspect a few list members have stopped following this thread by now, and I am reading it myself on and off, but I really wonder: am I really the only one who thinks this to be a contradiction in terms, not allowing any sensible answer? Or that mystical concepts of "consciousness" do not bear close inspection in the first place, so making any debate on the possibility of emulation on systems different from organic brains rather moot?
-- Stefano Vaj From stathisp at gmail.com Fri Jan 8 16:27:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 03:27:56 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> Message-ID: 2010/1/9 Stefano Vaj : > 2010/1/8 Stathis Papaioannou : >> He explicitly says that a machine could fully reproduce the function >> of a brain but fail to reproduce the consciousness of the brain. > > I suspect a few list members have stopped following this thread by > now, and I am reading it myself on and off, but I really wonder: am > I really the only one who thinks this to be a contradiction in terms, > not allowing any sensible answer? Or that mystical concepts of > "consciousness" do not bear close inspection in the first place, so making > any debate on the possibility of emulation on systems different from > organic brains rather moot? At first glance it looks coherent. I really would like to know before installing such a machine in my head to replace my failing neurons whether my consciousness, whatever it is, will remain intact. It can be demonstrated to my satisfaction that the machine will function exactly the same as brain tissue, but is that enough? I want a *guarantee* that I'll feel just the same after the procedure. I think that guarantee can be provided by considering the absurd consequences should it actually be the case that brain function and consciousness are separable. -- Stathis Papaioannou From aware at awareresearch.com Fri Jan 8 16:42:08 2010 From: aware at awareresearch.com (Aware) Date: Fri, 8 Jan 2010 08:42:08 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Fri, Jan 8, 2010 at 7:48 AM, Stathis Papaioannou wrote: > 2010/1/8 Aware : > >> But if you watch carefully, he accepts functionalism, IFF the >> candidate machine/substrate actually reproduces the function of the >> brain. But then he goes on to show that for any formal description of >> any machine, there's no place IN THE MACHINE where understanding >> actually occurs. > > He explicitly says that a machine could fully reproduce the function > of a brain but fail to reproduce the consciousness of the brain. He > believes that the consciousness resides in the actual substrate, not > the function to which the substrate is put. If you want to extend > "function" to include consciousness then he is a functionalist, but > that is not a conventional use of the term. Searle is conflicted. Just not at the level you (and most others) keep focusing on. When I first read of the Chinese Room back in the early 80s my first reaction was a bit of disdain for his "obvious" lack of respect for scientific materialism. But at the same time I had the nagging thought that this is an intelligent guy, so maybe there's something more subtle going on (even though he's still wrong) and look at how the arguments just keep going around and around. The next time I came back to it, later in the 80s, it made complete sense to me (while he was still wrong but still getting a lot of mileage out of his ostensible paradox.) As I've said before on this list, paradox is always a matter of insufficient context. In the bigger picture all the pieces must fit. >> He's right about that.
> > He actually *does* think there is a place in the machine where > understanding occurs, Yes, I've emphasized that mistaken premise as loudly as I could, a few times. > if the machine is a brain. or a "fully functional equivalent", WHATEVER THAT MEANS. Note that Searle, like Chalmers, does not provide any resolution, but only emphasizes "the great mystery", the "hard problem" of consciousness. >> But here he goes wrong: He claims that human brains obviously do have >> understanding, and suggests that he has therefore proved that there is >> something different about attempts to produce the same in machines. >> >> But there's no understanding in the human brain, either, nor any >> evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER. > > Right. > >> We don't have understanding in our brains, but we don't need it. >> Never did. We have only actions, which appear (with good reason) to >> be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE >> ACTOR ITSELF. > > Searle would probably say there's no observer in a computer. I agree that's what he would say. It works well with popular opinion, and keeps the discussion spinning around and around. >> Sure it's non-intuitive. It's Zen. In the true, non-bastardized >> sense of the word. And if you're gonna design an AI that displays >> consciousness, then it would be helpful to understand this so you >> don't spin your wheels trying to figure out how to implement it. > > You could take the brute force route and copy the brain. Yes, Markram is working on implementing something like that, and Kurzweil uses that as his limiting case for predicting the arrival of "human equivalent" artificial intelligence. There are complications with that "obvious" approach, but I have no desire to embark on another, likely fruitless, thread at this time. >>>> So I suggest (again) to you and Gordon, and Searle, that you need to >>>> broaden your context. That there is no essential consciousness in the >>>> system, but in the recursive relation between the observer and the >>>> observed. Even (or especially) when the observer and observed are >>>> functions of the same brain, you get self-awareness entailing the >>>> reported experience of consciousness, which is just as good because >>>> it's all you ever really had. >> described. But if you ask the observer about the experience, of >> course it will truthfully [without deception] report in terms of first-person experience. >> What more is there to say? > > Searle would say that experience must be an intrinsic property of the > matter causing the experience. If not, then it would be possible to > get it out of one system reacting to or observing another system as > you describe, which would be deriving meaning from syntax, which he > believes is a priori impossible. As far as I know, he does NOT say that "experience" (qualia/meaning/intentionality/consciousness/self/free-will) must be an intrinsic property of the matter. He appears content to present it as a great mystery, one that quite conveniently pushes people's buttons by appearing on one side to elevate the status of humans as somehow possessing a special quality, and on the other side by offending the righteous sensibilities of those who feel they must defend scientific materialism. It's all good for Searle as the debate swirls around him and around and around...
- Jef From stefano.vaj at gmail.com Fri Jan 8 16:59:03 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 17:59:03 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> Message-ID: <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> 2010/1/8 Stathis Papaioannou : > At first glance it looks coherent. I really would like to know before > installing such a machine in my head to replace my failing neurons > whether my consciousness, whatever it is, will remain intact. As long as you "wrongly" believe to be conscious after such install, what difference does it make? Not only "practical" difference, mind, even *merely logical* difference... Everybody is a Dennett's zimbo, nobody is... -- Stefano Vaj From thespike at satx.rr.com Fri Jan 8 17:06:31 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 11:06:31 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <201001080840.o088ec6A009055@andromeda.ziaspace.com> References: <201001080840.o088ec6A009055@andromeda.ziaspace.com> Message-ID: <4B476617.8060800@satx.rr.com> On 1/8/2010 2:40 AM, Max More wrote: > It may be that my resistance to claims of psi phenomena is just sour > grapes, since in my own life I've never observed the slightest hint of > psychic events or abilities. Good post, Max, thanks. I have seen very little evidence myself *in my own life*--but I seem to be pretty much the contrary of the type of temperament that seems to function "psychically". I can't see very well either, but that doesn't make me disbelieve in sight. > However, past experience makes me extremely reluctant to devote > significant time to looking at new evidence (esp. when so much previous > new evidence ended up looking bad). I'm not sure that's true if one sticks to rigorous tests of fairly minimal claims. I hope it's obvious that I'm not all puppy dog excited about solar astrology, dowsing, Rosicrucian secrets from Atlantis, ghosts that clank in the night, "psychotronic weapons", Mayan 2012 apocalyptic prophecies, and lots of other inane topics that fill the Coast-to-Coast airwaves. > I would like your book on the topic, Damien. But, given my past > experience and the apparently minor nature of claimed results, it's just > not likely that it's going to be a top priority. I know that's annoying > and frustrating, but I hope you can understand why I see it that way > (and, I suspect, quite a few other people on this list). I do understand that, of course, but what offends yet also grimly amuses me is the conditioned reflex scorn--the sort of thing our friend John Clark specializes in--that complacently dismisses years of careful work without knowing the first thing about it. As you say, you read a lot on this topic when you were a kid, tried some magick, etc, so obviously you don't fall into this category--or not quite, because I suspect you're still a victim of premature closure. I know how that works, because I was in the same boat for years. I was enthusiastic about psi claims as a young adolescent, mostly from reading sf editorials about Rhine etc, then stopped taking it seriously as a university student. I read all the pop-critical books whacking away at the loonies, the Scientologists, etc, with great relish.
Then when I was nearing 30 I got interested again after reading a paper about a university study that had worked, and came up with some approaches that seemed promising. (Years later I found out that the same ideas were being explored at the same time, or a bit later, by the well-funded CIA and military researchers in what was eventually known as Star Gate.) Curious, but unable to afford massive research, I went back to old published data and saw that when some elementary information theory was applied to it, out popped rather startling indications that psi was real after all. This was especially impressive when it showed up in data from experiments that had apparently failed. (If suppressing "negative results" had been the rule, I'd never have seen this data; luckily, parapsychologists in the 1930s and 1950s were often prepared to publish what looked like failed experiments.) Subsequently, no-one was more surprised than I to discover that serious "remote viewing" claims--Joe McMoneagle's and Stephan Schwartz's, say--were often corroborated (despite the encrustation of bogosity from scammers now claiming falsely to have been big wheels in Star Gate). So why aren't psychics rich? Why is Osama still running free? (Gee, who would gain from that?) Why do we bother with cars instead of levitating? Good questions, but then if there are antibiotics why do people still get sick, and if there's dark matter why isn't there a really good theory to explain it, and on and on. Damien Broderick From pjmanney at gmail.com Fri Jan 8 17:47:04 2010 From: pjmanney at gmail.com (PJ Manney) Date: Fri, 8 Jan 2010 09:47:04 -0800 Subject: [ExI] H+ Magazine blog on Bitch Slap Message-ID: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> For a bit of shameless, self-promotional fun today: http://www.hplusmagazine.com/editors-blog/how-bitch-slap-will-bring-about-singularity PJ From jrd1415 at gmail.com Fri Jan 8 18:25:24 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 8 Jan 2010 11:25:24 -0700 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: Isabelle Hakala : > Please try to keep these discussions civil, as we want to encourage people > to share their opinions without feeling attacked by others, otherwise we > will not have a diversity of opinions, which is needed to stretch our > capacity for reasoning. > > -Isabelle Hear, hear! (or is it Here, here! Whatever.) Welcome to the list, Isabelle. And of course, your reminder re the benefits of civility is always,...well... worth reminding... others...of. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From jonkc at bellsouth.net Fri Jan 8 18:40:45 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 8 Jan 2010 13:40:45 -0500 Subject: [ExI] Psi.
(no need to read this post you already know what itsays ) In-Reply-To: <4B464EEF.1000200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <4B464EEF.1000200@satx.rr.com> Message-ID: <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> On Jan 7, 2010, Damien Broderick wrote: > Some refuse even to consider evidence when it's provided (John Clark, say, who proudly declares that he won't look at anything pretending to be evidence for psi, since he knows a priori that it's BULLSHIT!!!). That is only partially true; I'm more than willing to look at evidence provided it really is evidence. However it's true I'm not willing to look at the "evidence" posted on a website by somebody I've never heard of, because there is no web of trust between me, the reader, and the originator of this "evidence", as there is in a legitimate Scientific journal. As a result the only thing stuff like this is really evidence for is that somebody knows how to type. At the start of every year for the last 10 years I've made a paranormal prediction for the coming year; I've predicted that a positive Psi (or ESP or spiritualism) article will NOT appear in Nature or Science or Physical Review Letters for the next year, and I've been proven right each and every year; I must be psychic. I made an identical prediction for this year. Anybody want to bet against me? John K Clark From thespike at satx.rr.com Fri Jan 8 19:08:52 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 13:08:52 -0600 Subject: [ExI] Psi. (no need to read this post you already know what itsays ) In-Reply-To: <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <4B464EEF.1000200@satx.rr.com> <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> Message-ID: <4B4782C4.5010408@satx.rr.com> On 1/8/2010 12:40 PM, John Clark wrote: > At the start of every year for the last 10 years I've made a paranormal > prediction for the coming year; I've predicted that a positive Psi (or > ESP or spiritualism) article will NOT appear in Nature or Science or > Physical Review Letters for the next year No, you've made a sociologically astute prediction based on your implicit knowledge of the reigning paradigm prejudices of those journals. I can make a similar prediction: if any scientist known to be critical or skeptical of psi-claims does publish a replication paper (in a peer-reviewed journal willing to print it) supporting the reality of such phenomena, he or she will immediately become known and mocked as a "believer" and ignored by all decent right-thinking scientists, at least on that topic. (This happens routinely in science, for understandable reasons. An example discussed recently in the NYT, IIRC, was a woman whose work on epigenetic effects was dropped scornfully into the trash bin in front of her by a senior scientist she showed it to, hindering her work for many years.)
Damien Broderick From jonkc at bellsouth.net Fri Jan 8 18:43:59 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 8 Jan 2010 13:43:59 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <684147.29925.qm@web36506.mail.mud.yahoo.com> References: <684147.29925.qm@web36506.mail.mud.yahoo.com> Message-ID: <6CA3F4EB-CE15-4444-9A4C-22360E3CDB14@bellsouth.net> On Jan 8, 2010, Gordon Swobe wrote: > the presence of oxygen clearly plays a role in whatever physical conditions must exist in the brain for it to have subjective experience. So without oxygen you will die; well, it may not be profound, but unlike most of your utterances at least it is true. John K Clark From spike66 at att.net Fri Jan 8 18:51:39 2010 From: spike66 at att.net (spike) Date: Fri, 8 Jan 2010 10:51:39 -0800 Subject: [ExI] H+ Magazine blog on Bitch Slap In-Reply-To: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> References: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> Message-ID: <55B7EDA404DD4D778F27C240C759FC43@spike> > ...On Behalf Of PJ Manney ... > Subject: [ExI] H+ Magazine blog on Bitch Slap > > For a bit of shameless, self-promotional fun today: > > http://www.hplusmagazine.com/editors-blog/how-bitch-slap-will- > bring-about-singularity > > PJ Hmmm. PJ, I think I'll skip the whole Bitch Slap scene, thanks. The dreamy Nolan Sisters are more my style. They cause me to struggle for life extension just so I can preserve them for future generations. {8^D spike From pharos at gmail.com Fri Jan 8 19:25:04 2010 From: pharos at gmail.com (BillK) Date: Fri, 8 Jan 2010 19:25:04 +0000 Subject: [ExI] H+ Magazine blog on Bitch Slap In-Reply-To: <55B7EDA404DD4D778F27C240C759FC43@spike> References: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> <55B7EDA404DD4D778F27C240C759FC43@spike> Message-ID: On 1/8/10, spike wrote: > Hmmm. PJ, I think I'll skip the whole Bitch Slap scene, thanks. The dreamy > Nolan Sisters are more my style. They cause me to struggle for life > extension just so I can preserve them for future generations. {8^D > > In the UK, at the height of their fame in the 1970s just about every young male of that era went to bed dreaming about being attacked by the Nolan sisters. ;) BillK From spike66 at att.net Fri Jan 8 19:22:30 2010 From: spike66 at att.net (spike) Date: Fri, 8 Jan 2010 11:22:30 -0800 Subject: [ExI] Psi (no need to read this post you already know whatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM><79471C131D7F4EE28EE05A725ED29AED@spike><4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <3B410568F34D40948CD9713FAFCC024B@spike> > ...On Behalf Of Jeff Davis > ... > Isabelle Hakala : > > > Please try to keep these discussions civil... -Isabelle > > Hear, hear! (or is it Here, here! Whatever.) Neither Jeff, but rather: Hear Here. Think about it, makes perfect sense. (Otherwise it would have been Hear squared.) Back in the old days before sound amplification, government was accomplished by a bunch of politicians in a single room discussing some topic. Perhaps several were talking at the same time. When someone uttered some noteworthy comment, a bystander would say Hear Here! to call the attention of the others. The rhyme makes it better than Listen Here. > Welcome to the list, Isabelle.
And of course, your reminder > re the benefits of civility is always,...well... worth reminding... > others...of. Best, Jeff Davis Read here! ^^^^ spike From spike66 at att.net Fri Jan 8 20:30:08 2010 From: spike66 at att.net (spike) Date: Fri, 8 Jan 2010 12:30:08 -0800 Subject: [ExI] H+ Magazine blog on Bitch Slap In-Reply-To: References: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com><55B7EDA404DD4D778F27C240C759FC43@spike> Message-ID: <4E297F8025CE417EAC49376D5F8A48FA@spike> > ...On Behalf Of BillK > Subject: Re: [ExI] H+ Magazine blog on Bitch Slap > > On 1/8/10, spike wrote: > > ...The dreamy Nolan Sisters are more my style... spike > > In the UK, at the height of their fame in the 1970s just > about every young male of that era went to bed dreaming about > being attacked by the Nolan sisters. ;) BillK If you meant attacked in the sexual sense, that would be quite unlikely, for it is impossible to rape the willing. Nay, far beyond willing, eager. Strange that I had never heard of them. Perhaps they never did much on American television or radio. Other yanks, are the Nolans new to you? YouTube forms a kind of time machine. SF writers of the past did not envision time travel in this indirect manner, but it might even have some advantages over actual time travel: we get the Nolans free, without having to go back to 1978 and buy a ticket to their concert or suffer through commercial messages on TV. I am now thinking there is something magic about Ireland. That island has produced at least two groups of five stunning beauties: the Nolan sisters in the 1970s and three decades later the Celtic Women: http://www.youtube.com/watch?v=LHOyPLSVam4&feature=related The youngest of these in this video, the stunning Hayley Westenra, is actually from Australia, another island continent. The fifth Celtic Woman, not shown here, is Mairead Nesbitt, who is not only to-stay-alive-for* gorgeous, but is also a monster talent on the fiddle and a lithe dancer. Check out the simultaneous dancing, fiddling, and being beautiful: http://vids.myspace.com/index.cfm?fuseaction=vids.individual&videoID=2022329931 One's fondest dream could never do justice to these. The US has a much larger population, but has not produced or enjoyed such native talent as Ireland and Australia since Karen Carpenter perished on the darkly memorable day, 4 February 1983. spike *The more common expression to-die-for gorgeous makes no sense. Rather the opposite: picture a dying patient making the choice between continuing the painful radiation and chemotherapy or letting nature take its course. Chancing to see Mairead or for that matter Hayley or any of the other Celtic Women, the patient may reverse course and decide it is worth it to have a little more time to enjoy these to-stay-alive-for gorgeous ladies. From gts_2000 at yahoo.com Fri Jan 8 23:26:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 15:26:45 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <366490.44362.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/8/10, Stathis Papaioannou wrote: > I think I see what you mean now. The generic p-neurons > can't have any information about language pre-programmed, so the patient > will have to learn to speak again. However, the same problem will occur > with the c-neurons. Replacement with c-neurons would work in a straightforward manner even supposing the patient might need to relearn language.
But with p-neurons he will have no experience of understanding words even after his surgeon programs them. And because the experience of understanding words affects the behavior of neurons associated with that understanding, our surgeon/programmer of p-neurons faces a tremendous challenge, one that his c-neuron replacing colleagues needn't face. > However, Sam will truly understand what he is saying while Cram will > behave as if he understands what he is saying and believe that he > understands what he is saying, without actually > understanding anything. Is that right? He will behave outwardly as if he understands words but he will not "believe" anything. He will have weak AI. -gts From gts_2000 at yahoo.com Fri Jan 8 23:57:09 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 15:57:09 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: Message-ID: <892247.48619.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/8/10, Aware wrote: > But there's no understanding in the human brain, > either, nor any evidence for it, EXCEPT FOR THE REPORTS OF AN > OBSERVER. And what do you think that observer is reporting about, Jef? Without your self-reported observations of your own understanding, you would lack intentionality. You would have only weak AI. > We don't have understanding in our brains, but we > don't need it. Absurd. I suppose you think your sentence above has meaning, and that you understand it. I suppose also that if I removed certain parts of your brain or impaired it with drugs then you would cease to understand it. Sure seems to me that you understand words in your brain. -gts From aware at awareresearch.com Sat Jan 9 00:35:36 2010 From: aware at awareresearch.com (Aware) Date: Fri, 8 Jan 2010 16:35:36 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: <892247.48619.qm@web36508.mail.mud.yahoo.com> References: <892247.48619.qm@web36508.mail.mud.yahoo.com> Message-ID: On Fri, Jan 8, 2010 at 3:57 PM, Gordon Swobe wrote: > --- On Fri, 1/8/10, Aware wrote: > >> But there's no understanding in the human brain, >> either, nor any evidence for it, EXCEPT FOR THE REPORTS OF AN >> OBSERVER. > > And what do you think that observer is reporting about, Jef? > > Without your self-reported observations of your own understanding, you would lack intentionality. You would have only weak AI. > >> We don't have understanding in our brains, but we >> don't need it. > > Absurd. I suppose you think your sentence above has meaning, and that you understand it. I suppose also that if I removed certain parts of your brain or impaired it with drugs then you would cease to understand it. Sure seems to me that you understand words in your brain. It's ironic how you strip everything I wrote down to a little sound bite--removing the context--so you can point to it and call it absurd. Ironic because appreciating the importance and role of context is key to resolving your puzzle. Have fun. My holiday visit to Extropy-chat has about run its course. I've been well-reminded that there's little benefit to continued participation so I'll simply remain Aware in the background and perhaps check in with you later. 
- Jef From stathisp at gmail.com Sat Jan 9 04:41:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 15:41:56 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <366490.44362.qm@web36508.mail.mud.yahoo.com> References: <366490.44362.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/9 Gordon Swobe : >> However, Sam will truly understand what he is saying while Cram will >> behave as if he understands what he is saying and believe that he >> understands what he is saying, without actually >> understanding anything. Is that right? > > He will behave outwardly as if he understands words but he will not "believe" anything. He will have weak AI. The patient was not a zombie before the operation, since most of his brain was functioning normally, so why would he be a zombie after? Before the operation he sees that people don't understand him when he speaks, and that he doesn't understand them when they speak. He hears the sounds they make, but it seems like gibberish, making him frustrated. After the operation, whether he gets the p-neurons or the c-neurons, he speaks normally, he seems to understand things normally, and he believes that the operation is a success as he remembers his difficulties before and now sees that he doesn't have them. Perhaps you see the problem I am getting at and you are trying to get around it by saying that Cram would become a zombie. But by what mechanism would the replacement of only a few neurons negate the consciousness of the rest of the brain? -- Stathis Papaioannou From rafal.smigrodzki at gmail.com Sat Jan 9 06:44:42 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 9 Jan 2010 01:44:42 -0500 Subject: [ExI] ancap propaganda Message-ID: <7641ddc61001082244g374e2998w37aa5b742f878f62@mail.gmail.com> I wrote a way-too-long post on anarchocapitalism (under the name polycentric law, order and defense) http://triviallyso.blogspot.com/2010/01/of-beating-hearts-part-2.html Comments welcome. Rafal From stathisp at gmail.com Sat Jan 9 10:00:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 21:00:01 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> Message-ID: 2010/1/9 Stefano Vaj : > 2010/1/8 Stathis Papaioannou : >> At first glance it looks coherent. I really would like to know before >> installing such a machine in my head to replace my failing neurons >> whether my consciousness, whatever it is, will remain intact. > > As long as you "wrongly" believe to be conscious after such install, > what difference does it make? > > Not only "practical" difference, mind, even *merely logical* difference... > > Everybody is a Dennett's zimbo, nobody is... I agree with you, of course. The zombie neurons are just as good as real neurons in every respect, so there is really no basis for distinguishing them as zombie neurons. But this needs to be carefully explained; if it were obvious, Gordon would not still be arguing. -- Stathis Papaioannou From bbenzai at yahoo.com Sat Jan 9 17:41:11 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 9 Jan 2010 09:41:11 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <298569.9122.qm@web113602.mail.gq1.yahoo.com> In 'Are We Spiritual Machines?: Ray Kurzweil vs.
the Critics of Strong AI', John Searle says: "Here is what happened inside Deep Blue. The computer has a bunch of meaningless symbols that the programmers use to represent the positions of the pieces on the board. It has a bunch of equally meaningless symbols that the programmers use to represent options for possible moves." This is a perfect example of why I can't take the guy seriously. He talks about 'meaningless' symbols, then goes on to describe what those symbols mean! He is *explicitly* stating that two sets of symbols represent positions on a chess board, and options for possible moves, respectively, while at the same time claiming that these symbols are meaningless. wtf? Does he even read what he writes? I'm baffled that anyone can write something like this, and not immediately delete it, looking round in embarrassment hoping that nobody saw it. What kind of 'meaning' can a symbol have, other than what it represents in the context in which it appears? If, in Deep Blue, the number hA45 stored in a particular memory location is used to represent "White Queen on square 4:6", that is its meaning to Deep Blue. Just as a specific pattern of neuron firings in a certain part of my brain represents the taste of chocolate ice-cream. If that's a 'meaningless symbol', then I must think using meaningless symbols. It seems to work just fine, so if "meaningless" (in this context) means anything, it's irrelevant to the functioning of at least one mind. Maybe I'm a zombie! (If a zombie realises it's a zombie, does that mean it's cured?) Ben Zaiboc From gts_2000 at yahoo.com Sat Jan 9 18:11:55 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 10:11:55 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <606713.51772.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/8/10, Stathis Papaioannou wrote: >> He will behave outwardly as if he understands words >> but he will not "believe" anything. He will have weak AI. > > The patient was not a zombie before the operation, since > most of his brain was functioning normally, so why would he be a zombie > after? To believe something one must have an understanding of the meaning of the thing believed in, and I have assumed from the beginning of our experiment that the patient presents with no understanding of words, i.e., with complete receptive aphasia from a broken Wernicke's. I don't believe p-neurons will cure his aphasia subjectively, but I think his surgeon will eventually succeed in programming him to behave outwardly like one who understands words. After leaving the hospital, the patient might tell you he believes in Santa Claus, but he won't actually "believe" in it; that is, he won't have a conscious subjective understanding of the meaning of "Santa Claus". > Before the operation he sees that people don't understand > him when he speaks, and that he doesn't understand them when they > speak. He hears the sounds they make, but it seems like gibberish, making > him frustrated. After the operation, whether he gets the > p-neurons or the c-neurons, he speaks normally, he seems to understand > things normally, and he believes that the operation is a success as he > remembers his difficulties before and now sees that he doesn't have > them. Perhaps he no longer feels frustrated but still he has no idea what he's talking about! > Perhaps you see the problem I am getting at and you are > trying to get around it by saying that Cram would become a zombie.
I have only this question unanswered in my mind: "How much more complete a zombie does Cram become as a result of the surgeon's long and tedious process of reprogramming his brain to make him seem to function normally despite his inability to experience understanding? When the surgeon finally finishes with him such that he passes the Turing test, will the patient even know of his own existence?" -gts From max at maxmore.com Sat Jan 9 18:22:42 2010 From: max at maxmore.com (Max More) Date: Sat, 09 Jan 2010 12:22:42 -0600 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Avatar: misanthropy in three dimensions http://www.spiked-online.com/index.php/site/earticle/7895/ -- Comments from anyone who has seen the movie? (I haven't yet.) Max From jonkc at bellsouth.net Sat Jan 9 18:48:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 9 Jan 2010 13:48:33 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46B6DF.50504@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> Message-ID: <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> On Jan 7, 2010, Damien Broderick wrote: > I can't easily imagine this being acceptable on ExIchat if someone was trying to laugh away/explain away results of professional stem cell work If a high school dropout who worked as the bathroom attendant at the zoo had a website and claimed to have made a major discovery about stem cells from an experiment described on that website I would not bother to read it. John K Clark From jonkc at bellsouth.net Sat Jan 9 18:21:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 9 Jan 2010 13:21:56 -0500 Subject: [ExI] Psi. (no need to read this post you already know what itsays ) In-Reply-To: <4B4782C4.5010408@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <4B464EEF.1000200@satx.rr.com> <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> <4B4782C4.5010408@satx.rr.com> Message-ID: <6ACAEFFA-771E-443B-8517-22914D2F560C@bellsouth.net> On Jan 8, 2010 Damien Broderick wrote: >> At the start of every year for the last 10 years I've made a paranormal >> prediction for the coming year; I've predicted that a positive Psi (or >> ESP or spiritualism) article will NOT appear in Nature or Science or >> Physical Review Letters for the next year > > No, you've made a sociologically astute prediction based on your implicit knowledge of the reigning paradigm prejudices of those journals. So you think the reason these 3 journals (and I could easily extend my prediction to several dozen respectable journals) don't publish Psi stuff has all to do with sociology and nothing to do with Science.
I think they don't publish Psi papers because what they contain conflicts with reality. On my side I have journals that have published every major scientific discovery of the 20th century, on your side you have some bozo nobody ever heard of who typed some stuff onto a website or onto a dead tree that also nobody has ever heard of. > This happens routinely in science Not like this it doesn't! Sure on a few rare occasions somebody comes up with a correct idea that wasn't fully accepted for a long time, the most extreme example of that I can think of is continental drift, but even when it was in the minority the support for it never dropped to zero in the scientific community as it has for Psi. And the truth is that the evidence Wegener gave to support his theory was pretty weak and would remain weak until the 1960's. When the evidence did become good the Journals I mention didn't refuse to print it, far from it, they competed madly with each other to be the first to publish more about this wonderful new discovery. If the confirmation of Psi became as strong as it was for continental drift in the 60's the same thing would happen, but that didn't happen last year, it won't happen next year, it won't happen next decade and it won't happen next century. And for every Wegener who was incorrectly labeled a crackpot there were tens of thousands who really were crackpots. Damien, we last went into this Psi stuff a couple of years ago; in that time objects 13 billion light years away have been observed, microprocessors have become 5 or 6 times as powerful, the Poincare conjecture was proven and the genomes of hundreds of organisms have been sequenced; and what advances has the science of Psi achieved in that time? Zero, zilch, goose egg. Well over a century ago, long before the discovery of Quantum Mechanics or Relativity and even before Evolution and the Electromagnetic Theory of Light were generally accepted, people were saying Science was too hidebound to accept the existence of Psi and they are saying the exact same thing today as I'm certain they will be saying next century. > An example discussed recently in the NYT, IIRC, was a woman whose work on epigenetic effects was dropped scornfully into the trash bin in front of her by a senior scientist she showed it to, hindering her work for many years. Did it hinder her work for centuries? And if it was me I would have made a copy. John K Clark From gts_2000 at yahoo.com Sat Jan 9 18:54:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 10:54:11 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: <298569.9122.qm@web113602.mail.gq1.yahoo.com> Message-ID: <575831.58228.qm@web36504.mail.mud.yahoo.com> --- On Sat, 1/9/10, Ben Zaiboc wrote: > In 'Are We Spiritual Machines?: Ray > Kurzweil vs. the Critics of Strong AI', John Searle says: > > "Here is what happened inside Deep Blue. The computer has a > bunch of meaningless symbols that the programmers use to > represent the positions of the pieces on the board. It has a > bunch of equally meaningless symbols that the programmers > use to represent options for possible moves." > > > This is a perfect example of why I can't take the guy > seriously. He talks about 'meaningless' symbols, then > goes on to describe what those symbols mean!
He is > *explicitly* stating that two sets of symbols represent > positions on a chess board, and options for possible moves, > respectively, while at the same time claiming that these > symbols are meaningless. wtf? Human operators ascribe meanings to the symbols their computers manipulate. Sometimes humans forget this and pretend that the computers actually understand the meanings. It's an understandable mistake; after all it sure *looks* like computers understand the meanings. But then that's what programmers do for a living: we program dumb machines to make them look like they have understanding. The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" And Searle's answer is: "It won't happen from running formal syntactical programs on hardware as we do today, because computers and their programs cannot and will never get semantics from syntax. We'll need to find another way. And we know we can do it even if we don't yet know the way. After all nature did it in these machines we call humans." -gts From sparge at gmail.com Sat Jan 9 18:56:02 2010 From: sparge at gmail.com (Dave Sill) Date: Sat, 9 Jan 2010 13:56:02 -0500 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: On Sat, Jan 9, 2010 at 1:22 PM, Max More wrote: > > -- Comments from anyone who has seen the movie? (I haven't yet.) He's got the facts right, and clearly the movie isn't intended to paint the human race as faultless, but I think he's making too much out of it. This is primarily entertainment, not propaganda. It's fantasy. But it does mirror certain historical events. Wanting to be something better than human is a basic Extropian notion but the reviewer seems to think that it's akin to heresy. I definitely recommend that everyone catch Avatar in 3D in a theater. It's stunning--not just the CGI, which is fantastic, but also the human creativity that imagined it and brought it to life. It's a familiar story, but it's told very well and in a novel setting. -Dave From jonkc at bellsouth.net Sat Jan 9 18:37:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 9 Jan 2010 13:37:33 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46A73E.2090200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> Message-ID: <5F3842E7-3035-4CCD-90FC-FC44F1FFD539@bellsouth.net> On Jan 7, 2010, Damien Broderick wrote: > If you're assuming a lack of probity in the experimenter, why not just say the whole thing was scripted or made up and have done with it? That sounds like a fine idea to me. > This is the bottom line with most skeptical retorts. What would it take to dispose of this canard?
If the Psi miracle came from somebody with a reputation for being an outstanding experimental scientist then I would start to get interested; if his extraordinary results were duplicated by other experimentalists that I respected then I would be convinced; this hasn't happened yet, it hasn't even come close to happening. And no ASCII sequence posted on an obscure website claiming an experimental breakthrough could do that. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Jan 9 19:20:58 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 13:20:58 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> Message-ID: <4B48D71A.6010806@satx.rr.com> On 1/9/2010 12:48 PM, John Clark wrote: > If a high school dropout who worked as the bathroom attendant at the zoo > had a website and claimed to have made a major discovery about stem > cells from an experiment described on that website I would not bother to > read it. Neither would I, probably. When a biochemistry PhD and Research Fellow of the Royal Society, like Sheldrake, does so, I'd be less quick to dismiss his scientific report. What are your equivalent credentials, John? (Not that this is an important point, and verges on the ad hominem, but you're the one who keeps introducing imaginary and demeaning dropouts in trailer parks as the supposed source of everything you dismiss.) Damien Broderick From gts_2000 at yahoo.com Sat Jan 9 19:54:47 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 11:54:47 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <855333.87547.qm@web36505.mail.mud.yahoo.com> Stathis, You have mentioned on a couple of occasions that you think I must believe that the brain does something that does not lend itself to computation. I made a mental note to myself to try to figure out why you say this. I had planned to go through your messages again, but instead I'll try to address what I think you may have meant. Assume we know everything we can possibly know about the brain and that we use that knowledge to perfectly simulate a conscious brain on a computer. Even though I believe everything about the brain lends itself to computation, and even though I believe our hypothetical simulation in fact computes everything possible about a real conscious brain, I still also say that our simulation will have no subjective experience. Perhaps you want to know how I can say this without assigning some kind of strange non-computable aspect to natural brains. You may want to know how I can say this without asserting mind/matter duality or some other mystical concept to explain subjective experience. Understandable questions. The answer is that I say it because I don't believe the brain is actually a computer. 
Some people seem to think that if we can compute X on a computer then a computer simulation of X must equal X. But that's just a blatant non sequitur. -gts From msd001 at gmail.com Sat Jan 9 20:11:10 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 9 Jan 2010 15:11:10 -0500 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> On Sat, Jan 9, 2010 at 1:54 PM, Gordon Swobe wrote: > The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" How can we make people actually understand the meanings and not merely appear to understand the meanings? To the degree that it/you/anyone serves my purpose, I don't care what it/you/they "understand" as long as the appropriate behavior is displayed. Why is that so difficult to grasp? When I tell my dog "Sit" and it sits down, I don't need to be concerned about the dog's qualia or a platonic concept of sitting - only that the dog does what I want. If I tell a machine to find a worthy stock for investment, that's what I expect it should do. I would even be happy to entertain follow-up conversation with the machine regarding my opinion of "worthy" and my long-term investment goals - just like I would expect with a real broker or investment advisor. At another level of conversational interaction with proposed AGI, it might start asking me novel questions for the purpose of qualifying its model of my expected behavior. At that point, how can any of us declare that the machine doesn't have 'understanding' of the data it manages? Why would we? Understanding is highly overrated. Many people stumble through their lives with only a crude approximation of what is going on around them - and it works. From thespike at satx.rr.com Sat Jan 9 20:13:39 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 14:13:39 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <855333.87547.qm@web36505.mail.mud.yahoo.com> References: <855333.87547.qm@web36505.mail.mud.yahoo.com> Message-ID: <4B48E373.9030607@satx.rr.com> On 1/9/2010 1:54 PM, Gordon Swobe wrote: > how I can say this without assigning some kind of strange non-computable aspect to natural brains. > The answer is that I say it because I don't believe the brain is actually a computer. Isn't that exactly saying that you assign some kind of non-computable aspect to natural brains? (No reason why it should be strange, though.) As I said several days ago, a landslide doesn't seem to me to compute the trajectories of all its particles--at least not in any sense that I'm familiar with. We can *model* the process with various degrees of accuracy using equations, but it looks like a category mistake to suppose that the nuclear reactions in the sun are *calculating* what they're doing. I realize that Seth Lloyd and others disagree (or I think that's what he's saying in PROGRAMMING THE UNIVERSE--that the universe is *calculating itself*) but the whole idea of calculation seems to me to imply a compression or reduction of the mapping of some aspects of one large unwieldy system onto another extremely stripped-down toy system. That might be wrong, I know. I hope Gordon knows it might be wrong as well. 
Damien Broderick From emlynoregan at gmail.com Sat Jan 9 20:51:17 2010 From: emlynoregan at gmail.com (Emlyn) Date: Sun, 10 Jan 2010 07:21:17 +1030 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> 2010/1/10 Max More : > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) > > Max Slight spoilers ahead, although spoilers don't really matter with regard to this movie. First, see the movie, in 3D. Disregard all the reviews, and just go see it. When people say "the effects are great", they are seriously underselling. What makes Avatar such a monumental achievement is the animated fantasy world (Pandora), which outstrips anything we've seen before in terms of sheer visual lushness, creativity, imagination, beauty. Clearly, it's the best work of the best people using the best technology money can buy. The story could have been about how great it was to club baby seals, and everyone would still rave about it. See it, and be proud to live in 2010. (A slight amendment to this; my wife was unimpressed with it. She's not really a visual person, she's primarily aural mode. There's nothing in this movie for aural people) Regarding the story, you really can't read as much into it as this reviewer. It's skeletal. I think really there's so much meat on the bones of the setting, the visual environment, that they just couldn't afford to also tell a sophisticated story; you'd just not be able to take it in. Also, it's a movie designed for mass appeal, so the story is intentionally dumbed down. Even so, it's too much for some people; a friend was telling me that he sat in front of a woman who kept going "who's that, why are they doing that?" all the way through; she couldn't understand the correspondence between the humans and their avatars. The genre is fantasy action movie. It's got a science fiction setting, including elements with amazing potential, and mostly wastes all of that. As an action movie, it needs that coarse "here's the hero, here's the problem, here's the point of no return, now let's fight". The plot itself, as many have said, is Pocahontas / Dances with Wolves / The Last Samurai, etc, except that the native people win in the end. In fact, I'd say the biggest foil to that reviewer's complaint about the misanthropy is that you can't really believe the ending, because we know from our history that it just doesn't work that way - I imagined them being nuked from orbit 5 minutes after the end of the film. The misanthropy charge generally; it misses the point. In all the films that the reviewer mentions (and films like it), the point is not that humans are bad, and other stuff is good. It's more that the class of things we think of as people is larger than just those who look like and are encultured like us. To get that point across, the story tellers juxtapose people unlike us, with people like us, making the former the good guys and the latter the bad guys, to say to us that we should judge people by their behaviour, not by their tribal affiliations. It's a very straightforward left-oriented message (vs the right's intuitions about kinship, duty, loyalty, which are all about in-group). But generally it's a bit embarrassing to be overly offended or enthused by this story. 
It's just not got enough substance for that. Complain about the lack of sophistication (Movie in 3D, story in 2D), but the politics? Really? -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From olga.bourlin at gmail.com Sat Jan 9 21:19:16 2010 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Sat, 9 Jan 2010 13:19:16 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: I haven't seen Avatar either, but this plot sounds oh-so-familiar (and the last paragraph is significant): http://www.nytimes.com/2010/01/08/opinion/08brooks.html Patrick and I are movie fans, and "entertainment" is only one aspect of what we find interesting about movies. What movies reveal to us about the time in which they were produced (by which director and from what country) is what's often the more fascinating tale. Olga On Sat, Jan 9, 2010 at 10:22 AM, Max More wrote: > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) > > Max > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Jan 9 21:45:48 2010 From: pharos at gmail.com (BillK) Date: Sat, 9 Jan 2010 21:45:48 +0000 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B48E373.9030607@satx.rr.com> References: <855333.87547.qm@web36505.mail.mud.yahoo.com> <4B48E373.9030607@satx.rr.com> Message-ID: On 1/9/10, Damien Broderick wrote: > Isn't that exactly saying that you assign some kind of non-computable > aspect to natural brains? (No reason why it should be strange, though.) As I > said several days ago, a landslide doesn't seem to me to compute the > trajectories of all its particles--at least not in any sense that I'm > familiar with. We can *model* the process with various degrees of accuracy > using equations, but it looks like a category mistake to suppose that the > nuclear reactions in the sun are *calculating* what they're doing. I realize > that Seth Lloyd and others disagree (or I think that's what he's saying in > PROGRAMMING THE UNIVERSE--that the universe is *calculating itself*) > but the whole idea of calculation seems to me to imply a compression or > reduction of the mapping of some aspects of one large unwieldy system > onto another extremely stripped-down toy system. > > That might be wrong, I know. I hope Gordon knows it might be wrong as well. > > I think what Gordon might be trying to say is that the brain is not a *digital* computer. Digital computers separate data and program. The brain is more like an analogue computer. It is not like a digital computer that runs a program stored in memory. The brain *is* the program and *is* the computer. And it is a constantly changing analogue computer as it grows new paths and links. There are no brain programs that resemble computer programs stored in a coded format since all the programming and all the data is built into neuronal networks. 
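To make the program-is-data point concrete, here is a toy sketch in Python (purely illustrative; the class and the numbers are invented, and nothing here is meant as a model of real neurons). The network's only 'program' is its weight matrix, and simply running the network rewrites that same matrix:

import numpy as np

class ToyNet:
    # Toy network in which the weight matrix is at once program and data.
    def __init__(self, n=4, rate=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, size=(n, n))  # the only 'program' there is
        self.rate = rate

    def step(self, x):
        y = np.tanh(self.w @ x)               # behaviour is read off the weights...
        self.w += self.rate * np.outer(y, x)  # ...and running it rewrites those weights
        return y

net = ToyNet()
x = np.array([1.0, 0.0, -1.0, 0.5])
w_before = net.w.copy()
net.step(x)
print(np.abs(net.w - w_before).max() > 0.0)   # True: processing the input changed the 'program'

There is no separate stored program to point to: erase the weights and you have erased the 'software' and the 'memory' in one stroke.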
If you want to get really complicated, you can think of the brain as multiple analogue computers running in parallel, processing different functions, all growing and changing and passing signals between themselves. This modular parallel design is what causes 'consciousness' to be generated as a sort of synthesis product of all the lower-level modules. The digital computers we have today may not be able to do this. We may need a new generation of a different kind of computer to generate this 'consciousness'. It is a different question whether we need this 'consciousness' in our intelligent computers. BillK From spike66 at att.net Sat Jan 9 21:49:48 2010 From: spike66 at att.net (spike) Date: Sat, 9 Jan 2010 13:49:48 -0800 Subject: [ExI] happy memories and rising tides Message-ID: I had a number of happy memories rush back today, since I was in downtown San Jose at the McHenry Convention Center at a car show. I remembered an extro schmooze we had down there, or right next to it at the Hilton if I recall correctly. It must have been in the summer of 01, because I do recall several of us commenting at lunch how we missed Sasha Chislenko and how sad it was he was no longer with us. Is it not amazing he has been gone nearly ten years already? In contrast to that, I have an optimistic note to sound in this post. I moved to the San Jose area about 21 years ago. At that time, the area of town where the McHenry Center now stands was one where you wouldn't hang out, day or night. A mangy junkyard dog would watch his step down there. But by 2001, a revival had been taking place. There were plenty of safer looking areas where one could walk, especially if they were to hang out in clots of 4 or 5 geeks. The neighborhood was a bit spotty, but not bad really. Those who were there, comments welcome, affirming or contradicting. I do recall commenting at the time to stay on the north side of the freeway, for that adjoining neighborhood just two minutes' walk underneath that underpass was bad news indeed. But as I recall, the extropians found adequate sustenance and I do not recall anyone having felt threatened. We did suggest people stay indoors after dark if staying at the Hilton. So I was down there this morning at the car show. I marvelled at how nice everything was down there, clean, new, fixed up, nice, nothing at all that looked the least bit dangerous. So I walked around, and found it the same all around there, much better even than it was in 2001. So I decided to risk disappointment and check out on the other side of that freeway. I wasn't disappointed at all! That area is looking waaay better than I ever recall seeing it. Of course it is still lower end housing, lots of ancient dwellings, some fifty or more years old, still standing by some mysterious means after all these years. Perhaps they were built by the same guys who built the Anasazi cliff dwellings. But they were tidy and making an attempt at actual lawns and gardens in many of the houses. There was nothing over there that looked scary. Really! This was a great contrast to the way it appeared 20 years ago. In those days, if one should stumble into that area, one's chances of escaping alive were negligible. One might as well not even bother trying, but rather just save everyone a lot of time and effort, and just pull into the local mortuary, pick out one's favorite pine box, hand them a credit card, climb in and close the lid after oneself. I myself blundered into the area once, only to escape by sheer miracle. But it isn't that way now. 
I saw little that scared me at all. The rising tide has raised all boats in San Jose. I miss Sasha. May his memory live on forever. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Jan 9 21:48:13 2010 From: jonkc at bellsouth.net (john clark) Date: Sat, 9 Jan 2010 13:48:13 -0800 (PST) Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B48D71A.6010806@satx.rr.com> Message-ID: <630266.35889.qm@web180203.mail.gq1.yahoo.com> On Sat, 1/9/10, Damien Broderick wrote: > What are your equivalent credentials, John? Irrelevant, I have not presented experimental results that you are supposed to believe. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Sat Jan 9 21:47:43 2010 From: moulton at moulton.com (moulton at moulton.com) Date: 9 Jan 2010 21:47:43 -0000 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <20100109214743.12401.qmail@moulton.com> On Sat, 2010-01-09 at 12:22 -0600, Max More wrote: > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ To me the review is fundamentally confused. First the review mistakenly over-identifies some of the humans in the movie with humanity as a whole. Second the reviewer seems to be either dishonest or ignorant when writing: First, the miners and their mercenaries embark upon genocide with no thought whatsoever, despite the fact that humanity has considered genocidal behaviour to be a bad thing for some time now. This allows Avatar to imply that man has not changed since the explorations and conquests of the Middle Ages. Yes much of humanity has come to consider genocidal behaviour to be a "bad thing"; however, this consideration is not universal. Remember that in recent history there were humans who engaged in genocide in the heart of civilized Europe just about seven decades ago. And if the reviewer replies "oh that was long ago" then I suggest the reviewer consider Darfur. And further it is simply false to say that the movie Avatar implies "man has not changed since the explorations and conquests of the Middle Ages." It appears to me that the reviewer had an ax to grind and let ideological fervor overwhelm accuracy and coherency. For a humorous spoof I suggest a quick glance at: http://images.huffingtonpost.com/gen/130283/original.jpg > -- Comments from anyone who has seen the movie? (I haven't yet.) Yes I saw the movie and as others have commented it is visually stunning. The 3-D is well done and not used gratuitously. The animation and effects are well integrated. If only they had devoted 10% of the visual budget to a decent script and if they had avoided some poorly done gimmicks in an attempt to do quick character development. I cringed at the cigarette scene at the beginning of the movie. It was so amateurish. The big battle scene was almost a self parody. One way to get through the movie is to every five minutes predict in your mind the next five minutes of the movie, which is not difficult given how formulaic the movie is. After the movie one member of the group I was with made the comment after listening to various criticisms: "Remember the target audience is twelve year olds". So I suggest seeing it in 3D for the visuals but do not expect much from the story. I hope these comments are helpful. 
Fred From gts_2000 at yahoo.com Sat Jan 9 22:14:55 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 14:14:55 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B48E373.9030607@satx.rr.com> Message-ID: <483875.29296.qm@web36507.mail.mud.yahoo.com> --- On Sat, 1/9/10, Damien Broderick wrote: >> how I can say this without assigning some kind of >> strange non-computable aspect to natural brains. >> The answer is that I say it because I don't believe >> the brain is actually a computer. > > Isn't that exactly saying that you assign some kind of > non-computable aspect to natural brains? No, I think consciousness will likely turn out to be just a state that natural brains can enter, not unlike the way water can enter a state of solidity. Nothing strange or dualistic or non-physical or non-computable about it! But the computer simulation of it won't have consciousness any more than will a simulation of an ice cube have coldness. Computer simulations of things do not equal the things they simulate. (I wish I had a nickel for every time I've said that here :-) A computer simulation of a brain *would* however equal a brain in the special case that natural brains do in fact exist as computers. However real brains have semantics and it looks to me like real computers do not and cannot, so I do not equate natural brains with computers. The computationalist theory of mind seems like a nifty idea, but I think it does not compute. -gts From stefano.vaj at gmail.com Sat Jan 9 22:21:51 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 9 Jan 2010 23:21:51 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> Message-ID: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> 2010/1/9 Stathis Papaioannou : > I agree with you, of course. The zombie neurons are just as good as > real neurons in every respect, so there is really no basis in > distinguishing them as zombie neurons. But this needs to be carefully > explained; if it were obvious, Gordon would not still be arguing. I suspect that Gordon is simply a dualist, with emotional reasons to think that conscience must be something "special" and different from the mere phenomena it is used to describe. This is not a rare attitude, including amongst people with no overtly metaphysical penchant, but I think that no amount of science can make them change their views, which are fundamentally philosophical in nature. More interesting, if one is not willing to go down this way, I think one has ultimately to recognise that "conscience" is ultimately a (rather elusive) social construct, and that organic brains have not really much to do with it one way or another. Meaning that they are neither necessary nor sufficient to exhibit the behaviours required to allow us to engage in processes of projection and identification. -- Stefano Vaj From spike66 at att.net Sat Jan 9 22:38:11 2010 From: spike66 at att.net (spike) Date: Sat, 9 Jan 2010 14:38:11 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <20100109214743.12401.qmail@moulton.com> References: <20100109214743.12401.qmail@moulton.com> Message-ID: > On Behalf Of moulton at moulton.com > ... > "Remember the target audience is twelve year olds". ... > > I hope these comments are helpful... Fred Very much so, thanks Fred. 
Today's 12 yr olds have had avatar based video games their entire lives. I didn't play one until 1986. Fred, you and I would have been in our mid 20s by then. spike From thespike at satx.rr.com Sat Jan 9 22:41:55 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 16:41:55 -0600 Subject: [ExI] conscience In-Reply-To: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> Message-ID: <4B490633.8070900@satx.rr.com> On 1/9/2010 4:21 PM, Stefano Vaj wrote: > More interesting, if one is not willing to go down this way, I think > one has ultimately to recognise that "conscience" is ultimately a > (rather elusive) social construct, and that organic brains have not > really much to do with it one way or another. This is a curious lexical error that non-English writers often make in English, so I assume there must be only a single word in their languages for the two very different concepts "conscience" ("the inner sense of what is right or wrong in one's conduct or motives, impelling one toward right action") and "consciousness" ("the state of being conscious; awareness of one's own existence, sensations, thoughts, surroundings, etc."). I dimly recall that this is so in French. If so, how do you convey the difference in Italian, etc? Damien Broderick From scerir at libero.it Sat Jan 9 22:53:48 2010 From: scerir at libero.it (scerir) Date: Sat, 9 Jan 2010 23:53:48 +0100 (CET) Subject: [ExI] R: conscience Message-ID: <20823311.379841263077628333.JavaMail.defaultUser@defaultHost> in all conscience these are beautiful pictures http://hirise.lpl.arizona.edu/katalogos.php From pharos at gmail.com Sat Jan 9 23:12:36 2010 From: pharos at gmail.com (BillK) Date: Sat, 9 Jan 2010 23:12:36 +0000 Subject: [ExI] conscience In-Reply-To: <4B490633.8070900@satx.rr.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: On 1/9/10, Damien Broderick wrote: > This is a curious lexical error that non-English writers often make in > English, so I assume there must be only a single word in their languages for > the two very different concepts "conscience" ("the inner sense of what is > right or wrong in one's conduct or motives, impelling one toward right > action") and "consciousness" ("the state of being conscious; awareness of > one's own existence, sensations, thoughts, surroundings, etc."). I dimly > recall that this is so in French. If so, how do you convey the difference in > Italian, etc? > > You are correct that the same word 'conscience' has multiple meanings in French. See: Quote: Il est important de distinguer : * La conscience en tant que phénomène mental lié à la perception et la manipulation intentionnelle de représentations mentales, qui comprend : 1. la conscience du monde qui est en relation avec la perception du monde extérieur, des êtres vivants doués ou non de conscience dans l'environnement et dans la société (autrui). 2. la conscience de soi et de ce qui se passe dans l'esprit d'un individu : perceptions internes (corps propre), aspects de sa personnalité et de ses actes (identité 
du soi, opérations cognitives, attitudes propositionnelles). * La conscience morale, respect de règles d'éthique. Le terme conscience est donc susceptible de prendre plusieurs significations, selon le contexte. Translation: It is important to distinguish: * Consciousness as a mental phenomenon linked to the perception and the deliberate manipulation of mental representations, which includes: 1. consciousness of the world, which is related to the perception of the external world and of living beings, endowed with consciousness or not, in the environment and in society (others). 2. self-consciousness, and of what happens in the mind of an individual: internal perceptions (one's own body), aspects of one's personality and acts (identity of the self, cognitive operations, propositional attitudes). * Moral conscience, respect for rules of ethics. The term 'conscience' can thus take several meanings, depending on context. ------------------- So the French can make the distinction by saying 'La conscience morale' when they mean 'conscience'. BillK From thespike at satx.rr.com Sat Jan 9 23:27:30 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 17:27:30 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <630266.35889.qm@web180203.mail.gq1.yahoo.com> References: <630266.35889.qm@web180203.mail.gq1.yahoo.com> Message-ID: <4B4910E2.7060300@satx.rr.com> On 1/9/2010 3:48 PM, john clark wrote: > > What are your equivalent credentials, John? > Irrelevant, I have not presented experimental results that you are > supposed to believe. Okay. As my dear old departed Mum used to say, "A cat may look at a king." But of course you snipped out the real point of that post, which was Dr. Sheldrake's credentials, very far from the straw yokel you always push at us. You will probably reply (if you've bothered looking Sheldrake up): "Oh that idiot--he believes all sorts of mad BULLSHIT, and look, Nature's editor said his first book should be burned!" My response: I also find some of Sheldrake's theories over the top or silly, but (1) that has nothing to do with his experiments, and (2) he seems to have been driven toward them by experimental results that standard science hasn't accounted for, or just denies. So we're back in the trap I mentioned before: if a scientist with a solid background speaks up for psi, it *means* he's a lunatic/gullible/lying etc, so you don't need to consider anything further that he says. Meanwhile, your list of drooling backwoods cretins includes: Dr. Edwin May, PhD in experimental nuclear physics at UC Davis, long-time scientific director of the long-classified Star Gate program funded for some 20 years *on an annual basis, requiring scientific board oversight and approval* by its government sponsors. Dr. May is a friend of mine; I have read much of his work since it was declassified, and I trust him. Dr. Dean Radin, a master's in electrical engineering and a PhD in psychology from the University of Illinois, Champaign-Urbana. For a decade he worked on advanced telecommunications R&D at AT&T Bell Laboratories and GTE Laboratories. Professor Robert Jahn, former Dean of the department of Mechanical and Aerospace Engineering, School of Engineering/Applied Science, Princeton University. Dr. Roger Nelson, PhD in experimental cognitive psychology, long-time member of the Princeton University PEAR team. Dr. Stanley Krippner, B.S., University of Wisconsin, Madison, WI, MA and PhD Northwestern University, Evanston, IL. 
I could go on at some length. Most parapsychologists these days have their work card from the academy. So why aren't their papers published in Nature and Science? Suppose stem cell papers were routinely sent for review to Jesuits at the God Hates Abortion Institute at Notre Dame, or to the Tribophysics Dept at Columbia. Why, the referees mutter, this is BULLSHIT or wicked, or worse, you think we're going to waste our time on such nonsense? Reject! (I've read some such referee reports, such as one from Science; they are shamefully empty of critique.) Btw, how many papers in perceptual psychology are published in Nature? I don't know, maybe quite a few. Neuroscience might allow psi in, but it'd be a squeeze. My impression is that Nature tends to focus on physics, cosmology, cell biology, genetics, genomics, etc.** Damien Broderick **eg:
# Nature
# Nature Biotechnology
# Nature Cell Biology
# Nature Chemical Biology
# Nature Chemistry
# Nature Clinical Practice Journals
# Nature Communications
# Nature Digest
# Nature Genetics
# Nature Geoscience
# Nature Immunology
# Nature Materials
# Nature Medicine
# Nature Methods
# Nature Nanotechnology
# Nature Neuroscience
# Nature Photonics
# Nature Physics
# Nature Protocols
# Nature research journals
# Nature Reviews journals
# Nature Reviews Cancer
# Nature Reviews Cardiology (formerly Nature Clinical Practice Cardiovascular Medicine)
# Nature Reviews Clinical Oncology (formerly Nature Clinical Practice Oncology)
# Nature Reviews Drug Discovery
# Nature Reviews Endocrinology (formerly Nature Clinical Practice Endocrinology & Metabolism)
# Nature Reviews Gastroenterology and Hepatology (formerly Nature Clinical Practice Gastroenterology and Hepatology)
# Nature Reviews Genetics
# Nature Reviews Immunology
# Nature Reviews Microbiology
# Nature Reviews Molecular Cell Biology
# Nature Reviews Nephrology (formerly Nature Clinical Practice Nephrology)
# Nature Reviews Neurology (formerly Nature Clinical Practice Neurology)
# Nature Reviews Neuroscience
# Nature Reviews Rheumatology (formerly Nature Clinical Practice Rheumatology)
# Nature Reviews Urology (formerly Nature Clinical Practice Urology)
# Nature Structural and Molecular Biology
# Neuropsychopharmacology
From stathisp at gmail.com Sat Jan 9 23:38:38 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 10:38:38 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/10 Gordon Swobe : > Human operators ascribe meanings to the symbols their computers manipulate. Sometimes humans forget this and pretend that the computers actually understand the meanings. > > It's an understandable mistake; after all it sure *looks* like computers understand the meanings. But then that's what programmers do for a living: we program dumb machines to make them look like they have understanding. > > The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" > > And Searle's answer is: "It won't happen from running formal syntactical programs on hardware as we do today, because computers and their programs cannot and will never get semantics from syntax. We'll need to find another way. And we know we can do it even if we don't yet know the way. After all nature did it in these machines we call humans." 
The meaning of the symbols in a computer program is arbitrary, assigned by the programmer or by the context if the computer is learning from the environment. But where does the meaning of symbols in brains come from? A child is told "dog" and shown a picture of a dog, so "dog" comes to mean dog. It's not as if "dog" has some God-given, absolute meaning which only brains can access. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 9 23:41:06 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 10:41:06 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> Message-ID: 2010/1/10 Mike Dougherty : > How can we make people actually understand the meanings and not merely > appear to understand the meanings? More to the point, what is the difference between real understanding and pseudo-understanding? If I can use a word appropriately in every context, then ipso facto I understand that word. -- Stathis Papaioannou From pharos at gmail.com Sat Jan 9 23:59:05 2010 From: pharos at gmail.com (BillK) Date: Sat, 9 Jan 2010 23:59:05 +0000 Subject: [ExI] conscience In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: On 1/9/10, BillK wrote: > You are correct that the same word 'conscience' has multiple meanings > in French. > See: > > And a quick look in the Italian Wikipedia shows that the same range of multiple meanings for 'coscienza' exists in Italian as well. I don't speak Italian, but it seems reasonable that Italians could say 'coscienza morale' when they mean conscience. BillK From lcorbin at rawbw.com Sun Jan 10 00:02:02 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 09 Jan 2010 16:02:02 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> Message-ID: <4B4918FA.8000708@rawbw.com> Emlyn writes > The plot itself, as many have said, is Pocahontas / Dances with Wolves > / The Last Samurai, etc, except that the native people win in the end. > In fact, I'd say the biggest foil to that reviewer's complaint about > the misanthropy is that you can't really believe the ending, because > we know from our history that it just doesn't work that way Yes. The ending was so "unbelievable" (in a thinking man's sense) that I never gave it a second thought. > - I imagined them being nuked from orbit 5 minutes after > the end of the film. Well, you sound as anti-human as the reviewer Steve Bremner accused Cameron of being. What you write is not at all how it would end, any more than the movie's version. It would really end with the corporation coming back to Pandora and "making it right" with whoever the leaders are. I.e., cutting them in on the action. That's of course how Chief Seattle got what was coming to him. Not in the made-up account, as exemplified by a totally fabricated speech attributed to the Chief in 1971. 
See http://www.snopes.com/quotes/seattle.asp (but you have to look hard on Google to find the truth that the chief's speech was made up). Another thing so obvious in the movie that I didn't give it a second thought is that while the Pandorans had a great deal of biological knowledge to give humans (me, I'd estimate their flora and fauna to be about 500 million years more advanced than our Earth flora and fauna), the humans, on the other hand, clearly had a great deal of technology to share with the Pandorans. Thus a deal could be struck. Thus a deal *would* be struck... if you know anything at all about the history of trade. (I must add that Bernstein's new book "A Splendid Exchange" is the most limitlessly fascinating and informative book I can presently imagine about trade, and its impact on history.) > But generally it's a bit embarrassing to be overly offended or > enthused by this story. It's just not got enough substance for that. > Complain about the lack of sophistication (Movie in 3D, story in 2D), > but the politics? Really? I don't know. People growing up today, unless they're of an especially thoughtful variety, are surely being overwhelmed with all these anti-tech and anti-progress memes, just as the reviewer says. The media and the left-culture have already turned two generations of people against corporations (and by default therefore towards pro-government solutions), so what else is new. Yes, I'd agree, however, that the politics is not the first thing that strikes one about the movie. Lee From thespike at satx.rr.com Sun Jan 10 00:04:54 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 18:04:54 -0600 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> Message-ID: <4B4919A6.6050600@satx.rr.com> On 1/9/2010 5:41 PM, Stathis Papaioannou wrote: > More to the point, what is the difference between real understanding > and pseudo-understanding? If I can use a word appropriately in every > context, then ipso facto I understand that word. Not relevant. What you can do is exactly beside the point when discussing what robot systems can do. A good Google translation now can cough up a reliable translation from Hungarian (I know, I used it the other day to turn part of one of my human-translated papers back into English). It would be perverse to claim that the Google system understood the words being translated, even though the complex program was able to find appropriate English words and syntax. I understood it, the machine didn't. Damien Broderick From lcorbin at rawbw.com Sun Jan 10 00:06:00 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 09 Jan 2010 16:06:00 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <4B4919E8.6070104@rawbw.com> Max More wrote: > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) That reviewer is more like me than I am! I thought I was "way out there" in siding with the Japanese bureaucracy and modern gatling-gun toting imperial forces against the unruly, backward, primitive, and fighting-is-all-they-know samurai, in the movie "The Last Samurai". 
But I humbly bow to the reviewer *Steve Bremner* who wrote about Avatar: "By the end of the film, this reviewer felt like rising to his feet and cheering the final human attack on the Na'vi. Indeed, much of the audience seemed ambivalent - we were clearly dazzled by the spectacular 3D effects and the beautiful rendering of the alien planet, but the unrelentingly bleak portrayal of humanity left everyone more than a little despondent as we left the cinema to celebrate the New Year." Wow. Incredible. Even *I* was on the side of the Na'vi by then. Perhaps my relative lack of offense, compared to good old Steve here, is that I'm probably a lot older than he is, and have just been resigned for far more decades to the inevitable depiction in Hollywood movies of modernity as evil, and the glorification of the noble savage. I liked the film very much, notwithstanding that every word Steve writes is true, absolutely true (so to speak). People like Cameron are nothing short of hypocrites, if you have the sense to follow Steve Bremner's clear insights and conclusions. Well... high time I read all the rest of those posts following Max's and see if I'm just echoing the chorus or not. Lee P.S. The reviewer did not mention that all the male characters of the Na'vi are *warriors*, and you can't get to be a warrior unless there is a lot of war going on. Again, so much for the glorification of the hunter/gatherer/warrior. From pharos at gmail.com Sun Jan 10 00:12:44 2010 From: pharos at gmail.com (BillK) Date: Sun, 10 Jan 2010 00:12:44 +0000 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <4B4918FA.8000708@rawbw.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> <4B4918FA.8000708@rawbw.com> Message-ID: On 1/10/10, Lee Corbin wrote: > I don't know. People growing up today, unless they're of > an especially thoughtful variety, are surely being overwhelmed > with all these anti-tech and anti-progress memes, just as > the reviewer says. The media and the left-culture have already > turned two generations of people against corporations (and > by default therefore towards pro-government solutions), so > what else is new. Yes, I'd agree, however, that the politics > is not the first thing that strikes one about the movie. > > Nit-pick. Yes, people have been turned against corporations and more to pro-government solutions. But the problem in the US is that the corporations have become the government. So the mass of the US people who are rapidly becoming the poor and / or unemployed have to find a different government to be pro-. BillK From stathisp at gmail.com Sun Jan 10 01:16:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 12:16:12 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <606713.51772.qm@web36508.mail.mud.yahoo.com> References: <606713.51772.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/10 Gordon Swobe : >> The patient was not a zombie before the operation, since >> most of his brain was functioning normally, so why would he be a zombie >> after? > To believe something one must have an understanding of the meaning of the thing believed in, and I have assumed from the beginning of our experiment that the patient presents with no understanding of words, i.e., with complete receptive aphasia from a broken Wernicke's. 
I don't believe p-neurons will cure his aphasia subjectively, but I think his surgeon will eventually succeed in programming him to behave outwardly like one who understands words. > > After leaving the hospital, the patient might tell you he believes in Santa Claus, but he won't actually "believe" in it; that is, he won't have a conscious subjective understanding of the meaning of "Santa Claus". He has no understanding of words before the operation, but he still has understanding! If he sees a dog he knows it's a dog, he knows if it's a friendly dog or a vicious dog to be avoided, he knows that dogs have to eat and how to open a can of dog food, and so on - even though the word "dog" is incomprehensible to him. After the operation, whether it's Cram with the p-neurons or Sam with the c-neurons, when he hears the word "dog" he will get an image of a dog in his head, and he will think, "that must be what people meant when they were making sounds and pointing to a dog before!" If he is asked "how many legs does a dog which has lost one of its legs have?" he will get an image of a dog hobbling about on three legs and answer, "three"; and he will remember when he was a child and his own dog was run over by a car and lost one of its legs. So his behaviour in relation to language will be exactly the same whether he got the p-neurons or the c-neurons, and his cognitions, feelings, beliefs and understanding at least in the normal part of his brain will also be the same in either case. But you claim that Cram will actually have no understanding of "dog" despite all this. That is what seems absurd: what else could it possibly mean to understand a word if not to use the word appropriately and believe you know the meaning of the word? That's all you or I can claim at the moment; how do we know we don't have a zombified language centre? >> Before the operation he sees that people don't understand >> him when he speaks, and that he doesn't understand them when they >> speak. He hears the sounds they make, but it seems like gibberish, making >> him frustrated. After the operation, whether he gets the >> p-neurons or the c-neurons, he speaks normally, he seems to understand >> things normally, and he believes that the operation is a success as he >> remembers his difficulties before and now sees that he doesn't have >> them. > > Perhaps he no longer feels frustrated but still he has no idea what he's talking about! He only *thinks* he knows what he is talking about and *behaves* as if he knows what he is talking about. >> Perhaps you see the problem I am getting at and you are >> trying to get around it by saying that Cram would become a zombie. > > I have only this question unanswered in my mind: "How much more complete of a zombie does Cram become as a result of the surgeon's long and tedious process of reprogramming his brain to make him seem to function normally despite his inability to experience understanding? When the surgeon finally finishes with him such that he passes the Turing test, will the patient even know of his own existence?" Why do you think the surgeon needs to do anything to the rest of his brain? The p-neurons by definition accept input from the auditory cortex, process it and send output to the rest of the brain exactly the same as the c-neurons do. That's their one and only task, and the surgeon's task is to install them in the right place causing as little damage to the rest of the brain as possible. 
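To see what "the same I/O behaviour" amounts to in the simplest possible terms, here is a toy sketch in Python (the classes and the downstream function are invented for the example; real neurons are vastly more complicated). Two components with different internals but identical input-output mappings are indistinguishable to anything downstream of them:

def downstream_brain(signal):
    # Stands in for the rest of the brain: it sees only the outputs.
    return "salience!" if sum(signal) > 1.0 else "ignore"

class CNeuron:
    # Stand-in for the biological neuron: some messy internal process.
    def process(self, inputs):
        total = sum(i * 0.5 for i in inputs)  # pretend biophysics
        return [total, total ** 2]

class PNeuron:
    # Different internals, but the same input-output mapping.
    def process(self, inputs):
        total = 0.5 * sum(inputs)             # computed another way entirely
        return [total, total * total]

inputs = [0.9, 1.3, -0.2]
for neuron in (CNeuron(), PNeuron()):
    print(downstream_brain(neuron.process(inputs)))  # identical in both cases

If the two process() methods agree on every possible input, no test applied from outside, whether by the rest of the brain or by an external observer, can tell which one was installed.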
And if the p-neurons duplicate the I/O behaviour of c-neurons, the behaviour of the rest of the brain and the person as a whole must be the same. It must! Are you still trying to say that the p-neurons *won't* be able to duplicate the I/O behaviour of the c-neurons due to lacking understanding? Then you have to say that p-neurons (zombie or weak AI neurons) are impossible, that there is something non-algorithmic about the behaviour of neurons. But you seem very reluctant to agree to this. Instead, you put yourself in a position where you have to say that Cram lacks understanding, but behaves as if he has understanding and believes that he has understanding; in which case, we could all be Cram and not know it. -- Stathis Papaioannou From lcorbin at rawbw.com Sun Jan 10 01:17:55 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 09 Jan 2010 17:17:55 -0800 Subject: [ExI] Corporate Misbehavior (was Avatar: misanthropy in three dimensions) In-Reply-To: References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> <4B4918FA.8000708@rawbw.com> Message-ID: <4B492AC3.4030208@rawbw.com> BillK writes > Yes, people have been turned against corporations and more > to pro-government solutions. But the problem in the US is that the > corporations have become the government. I quite agree! I didn't mean to imply that corporations are run by public-spirited saints. On the contrary, like in politics, the systems evolve people into positions of power who are not exactly like your next door neighbor. But if restrained by competition, corporations are to a very great extent working on behalf of the public, just as Adam Smith explained was true of candlestick makers. Not that they, the corporations, like it one bit. J. P. Morgan constantly complained about "ruinous competition", and was largely successful in taming it, partly through mergers and monopolies, but most effectively in promoting government regulation. Do you think that the airlines, for example, *like* being deregulated? Nothing warms the corporate heart as much, or puts as much money in its pockets, as cozy regulation by sympathetic government types. Government is corporations' chief instrument of exercising power, these days. And unfortunately, most people totally miss the point when they call for more government regulation! Some 70,000 pages of new regulations are inflicted on the public each year by congress here in the USA. Oops. Did I say "congress"? Actually, far from it. If it were congress, then that at least would be somewhat constitutional. Instead, it is the regulatory agencies that regulate American life, as part of an evil entity we call "the Administration". And they work hand in hand with corporations to suppress competition, and especially the easy entry into the market of those who would challenge monopoly. > So the mass of the US people who are rapidly becoming > the poor and / or unemployed have to find a different > government to be pro-. Again, I totally agree! The present western governments need to be slowly disbanded, agency by agency. It will be painful (so painful that, of course, it will never happen), but the alternative is slow death by regulation. Which will happen. 
Lee From stathisp at gmail.com Sun Jan 10 01:29:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 12:29:51 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <483875.29296.qm@web36507.mail.mud.yahoo.com> References: <4B48E373.9030607@satx.rr.com> <483875.29296.qm@web36507.mail.mud.yahoo.com> Message-ID: 2010/1/10 Gordon Swobe : > But the computer simulation of it won't have consciousness any more than will a simulation of an ice cube have coldness. Computer simulations of things do not equal the things they simulate. (I wish I had a nickel for every time I've said that here :-) But the computer simulation can drive a robot to behave like the thing it is simulating, and this robot can be installed in place of part of the brain. The result must be (*must* be; I wish I had 5c for every time I've said that here) that the person with the cyborgised brain behaves normally and believes that everything is normal. So either you must allow that it is coherent to speak of a pseudo-understanding which is subjectively and objectively indistinguishable from true understanding, or you must admit that the original premise, that the robot part lacks understanding, is false. The only other way out is to deny that it is possible to make such robot parts because there is something about brain physics which is not computable. -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 10 01:53:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 12:53:27 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <855333.87547.qm@web36505.mail.mud.yahoo.com> <4B48E373.9030607@satx.rr.com> Message-ID: 2010/1/10 BillK : > I think what Gordon might be trying to say is that the brain is not a > *digital* computer. > > Digital computers separate data and program. > > The brain is more like an analogue computer. It is not like a digital > computer that runs a program stored in memory. The brain *is* the > program and *is* the computer. And it is a constantly changing > analogue computer as it grows new paths and links. There are no brain > programs that resemble computer programs stored in a coded format > since all the programming and all the data is built into neuronal > networks. > > If you want to get really complicated, you can think of the brain as > multiple analogue computers running in parallel, processing different > functions, all growing and changing and passing signals between > themselves. No-one claims that the brain is a digital computer, but it can be simulated by a digital computer. The ideal analogue computer cannot be emulated by a digital computer because it can use actual real numbers. However, the real world appears to be quantised rather than continuous, so actual analogue computers do not use real numbers. And even if the world turned out to be continuous, factors such as thermal noise would make all the decimal places after the first few in any parameter irrelevant, so there would be no need to use infinite precision arithmetic to simulate an analogue device. 
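A quick numerical sketch of that last point, in Python (the noise figure is arbitrary and chosen purely for illustration):

import random

random.seed(1)
NOISE = 1e-3  # assumed 'thermal' noise amplitude, for illustration only

true_value = 0.123456789012345   # an 'analogue' parameter with many decimal places
digital = round(true_value, 4)   # the same parameter kept to only 4 decimal places

rounding_error = abs(true_value - digital)
print(rounding_error)            # ~4.3e-05
print(rounding_error / NOISE)    # ~0.04: the truncation is a few percent of the noise

# Any single noisy 'measurement' of either value is dominated by the noise term:
print(abs((true_value + random.gauss(0, NOISE)) - (digital + random.gauss(0, NOISE))))

The error introduced by throwing away everything past the fourth decimal place is well over an order of magnitude smaller than the noise the physics adds anyway, so the finite-precision simulation is empirically as good as the 'real-numbered' original.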
-- Stathis Papaioannou From stathisp at gmail.com Sun Jan 10 02:06:52 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 13:06:52 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <4B4919A6.6050600@satx.rr.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> <4B4919A6.6050600@satx.rr.com> Message-ID: 2010/1/10 Damien Broderick : > On 1/9/2010 5:41 PM, Stathis Papaioannou wrote: >> >> More to the point, what is the difference between real understanding >> and pseudo-understanding? If I can use a word appropriately in every >> context, then ipso facto I understand that word. > > Not relevant. What you can do is exactly beside the point when discussing > what robot systems can do. A good Google translation now can cough up a > reliable translation from Hungarian (I know, I used it the other day to turn > part of one of my human-translated papers back into English). It would be > perverse to claim that the Google system understood the words being > translated, even though the complex program was able to find appropriate > English words and syntax. I understood it, the machine didn't. I specified "use a word appropriately in every context"; Google can't as yet do that. It is possible for a human to translate one language into another language using a dictionary despite understanding neither language. In order to understand it he has to have another dictionary so he can associate words in the unknown language with words in a language he does know, and in turn he associates words in the known language with objects in the real world. The objects in the real world are themselves only known through sense data, which is basically just more symbols, not the object itself. So it's syntactical relationships all the way down. What else could understanding possibly be? -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 10 02:46:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 13:46:08 +1100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: 2010/1/10 Max More : > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) I haven't seen the film, but in general I think it's a good thing if the more powerful party consider that they may be unfair in their dealings with the less powerful party, even if it isn't true. When the aliens arrive I would be happier if they are benevolent and self-doubting rather than benevolent and completely confident that they are doing the right thing. -- Stathis Papaioannou From moulton at moulton.com Sun Jan 10 04:33:07 2010 From: moulton at moulton.com (moulton at moulton.com) Date: 10 Jan 2010 04:33:07 -0000 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <20100110043307.96499.qmail@moulton.com> On Sat, 2010-01-09 at 14:38 -0800, spike wrote: > Very much so, thanks Fred. Today's 12 yr olds have had avatar based video > games their entire lives. I didn't play one until 1986. Fred, you and I > would have been in our mid 20s by then. Well, to be honest, in 1986 I was well beyond my mid 20s. Your comment brings to mind something a friend told me recently about teaching his son to drive about 5 years ago. 
It was much easier, and the son learned more quickly than his sister, who was about 4 years older than him. When the parent remarked to the son on how much easier it was to teach him than his older sister, the son replied that it was because he had played so many video games, including some which involved driving virtual vehicles. It would be an interesting study to see if those who learn to drive more easily due to playing video games have a higher or lower accident rate. Fred From ddraig at gmail.com Sun Jan 10 06:12:05 2010 From: ddraig at gmail.com (ddraig) Date: Sun, 10 Jan 2010 17:12:05 +1100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <20100110043307.96499.qmail@moulton.com> References: <20100110043307.96499.qmail@moulton.com> Message-ID: 2010/1/10 : > It would be an interesting study to see if those who learn to drive more easily > due to playing video games have a higher or lower accident rate. There is also a study showing that surgeons who are frequent gamers are better at microsurgery, due to the improved hand-eye coordination gaming gives them. Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From bbenzai at yahoo.com Sun Jan 10 13:31:13 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 10 Jan 2010 05:31:13 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <718501.52709.qm@web113618.mail.gq1.yahoo.com> > Damien Broderick wrote: > > On 1/9/2010 5:41 PM, Stathis Papaioannou wrote: > > More to the point, what is the difference between real > understanding > > and pseudo-understanding? If I can use a word > appropriately in every > > context, then ipso facto I understand that word. > > Not relevant. What you can do is exactly beside the point > when > discussing what robot systems can do. A good Google > translation now can > cough up a reliable translation from Hungarian (I know, I > used it the > other day to turn part of one of my human-translated papers > back into > English). It would be perverse to claim that the Google > system > understood the words being translated, even though the > complex program > was able to find appropriate English words and syntax. I > understood it, > the machine didn't. Google isn't a robot. What a human can do *is* relevant to what a robot can do, because they both not only have a brain, but also a body. The word "move" is meaningless to Google because it has no experience of moving, so all it can do is relate it to another word in another language. A system that does have the means of movement (whether via a real-world body or in a simulated environment) has an experience of what moving is like. That's the 'symbol grounding' it needs to make sense of the word. It now has a meaning. "Using the word appropriately in every context" means that if you say "Could you move 2 metres to your left?" the system will be able to answer yes or no, and do it or not, depending on its physical state and environment. Moving 2 metres to the left is meaningless to Google, because Google doesn't have legs (or wheels, etc.). If you hooked Google up to a robotic (or virtual) body, and gave it the means to sense the environment, and move the body, and hooked up words to actions, then it would be capable of understanding (assigning meaning to) the words, because they would now have a context.
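A toy Python sketch of what I mean (entirely made up, of course: a pretend agent in a pretend 10-metre room, nothing that Google actually does): the word "left" indexes an action and a sensory check, not just another word.

    class Agent:
        # a body on a one-dimensional axis; negative x is "left"
        def __init__(self):
            self.x = 0.0  # position in metres

        def can_move(self, metres, direction):
            # answer "yes or no" from physical state, before acting
            target = self.x + (-metres if direction == "left" else metres)
            return abs(target) <= 5.0  # the pretend room is 10 m wide

        def move(self, metres, direction):
            before = self.x  # sensory snapshot
            if self.can_move(metres, direction):
                self.x += -metres if direction == "left" else metres
            return self.x - before  # feedback: did the world change as predicted?

    agent = Agent()
    print(agent.can_move(2, "left"))  # True -> "yes, I could do that"
    print(agent.move(2, "left"))      # -2.0 -> the predicted change happened

However crude, the point is that "left" here is hooked to an actuator and to feedback from the environment, rather than to an entry in another dictionary.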
Ben Zaiboc From stathisp at gmail.com Sun Jan 10 14:46:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 01:46:22 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <718501.52709.qm@web113618.mail.gq1.yahoo.com> References: <718501.52709.qm@web113618.mail.gq1.yahoo.com> Message-ID: 2010/1/11 Ben Zaiboc : > Google isn't a robot. What a human can do *is* relevant to what a robot can do, because they both not only have a brain, but also a body. The word "move" is meaningless to Google because it has no experience of moving, so all it can do is relate it to another word in another language. > > A system that does have the means of movement (whether via a real-world body or in a simulated environment) has an experience of what moving is like. That's the 'symbol grounding' it needs to make sense of the word. It now has a meaning. > > "Using the word appropriately in every context" means that if you say "Could you move 2 metres to your left?" the system will be able to answer yes or no, and do it or not, depending on its physical state and environment. Moving 2 metres to the left is meaningless to Google, because Google doesn't have legs (or wheels, etc.). > > If you hooked Google up to a robotic (or virtual) body, and gave it the means to sense the environment, and move the body, and hooked up words to actions, then it would be capable of understanding (assigning meaning to) the words, because they would now have a context. It gets a bit tricky when you talk about a virtual body in a virtual environment. There may be a mapping between what happens in the computer when it follows an instruction to move two metres to the left and moving two metres to the left in the real world, but there is no basis for saying that this is what the symbols in the computer "mean", since there are also other possible mappings. -- Stathis Papaioannou From gts_2000 at yahoo.com Sun Jan 10 15:05:54 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 07:05:54 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <53116.2093.qm@web36506.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stathis Papaioannou wrote: >> After leaving the hospital, the patient might tell you > he believes in Santa Claus, but he won't actually "believe" > in it; that is, he won't have a conscious subjective > understanding of the meaning of "Santa Claus". > > He has no understanding of words before the operation, but > he still has understanding! If he sees a dog he knows it's a dog, To think coherently about dogs or about anything else, one must understand words, and this poor fellow cannot understand his own spoken or unspoken words or the words of others. At all. He completely lacks understanding of words, Stathis. Suffering from complete receptive aphasia, he has no coherent thoughts whatsoever. We can suppose less serious aphasias if you like, but to keep our experiment pure I have assumed complete receptive aphasia. With b-neurons or possibly with m-neurons we can cure him. With p-neurons we can only program him to speak and behave in a way that objective observers will find acceptable, i.e., we can program him to pass the Turing test. > But you claim that Cram will actually have no understanding of > "dog" despite all this. That is what seems absurd: what else could it > possibly mean to understand a word if not to use the word appropriately > and believe you know the meaning of the word?
Although Cram uses the word "dog" appropriately after the operation, he won't believe he knows the meaning of the word, i.e., he will not understand the word "dog". If that seems absurd to you, remember that he did not understand it before the operation either. In this respect nothing has changed. -gts From jonkc at bellsouth.net Sun Jan 10 16:03:48 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 11:03:48 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <201001080840.o088ec6A009055@andromeda.ziaspace.com> References: <201001080840.o088ec6A009055@andromeda.ziaspace.com> Message-ID: <71CD007B-C594-4E9A-A99C-07F6E18F009E@bellsouth.net> On Jan 8, 2010, Max More wrote: > it would shake up physics and expand our horizons and potentially open new avenues to enhancement. So, I have nothing intrinsically against it. I have nothing against Psi either and I wish it were true, I wish cold fusion worked too, but wishing does not make it so. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Jan 10 16:04:49 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 08:04:49 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> Message-ID: <342642.53797.qm@web36505.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stefano Vaj wrote: > I suspect that Gordon is simply a dualist, with emotional > reasons to think that conscience must be something "special" Not at all Stefano. I consider the world as made of just one kind of stuff, not two or more. In fact I just wrote something yesterday to Damien about how I consider consciousness as a physical state the brain can enter in a manner analogous to that by which water enters a state of solidity. When the brain enters that state, it has a feature we call experience. When it leaves that state, it doesn't. In other words, I think subjective experience exists as part of the same physical world in which we find gum-ball machines, mountains and basketballs. It differs from those things only in that it has a first-person ontology. We can't approach it the same way that we do things with third-person ontologies but this does not in any way make it other-worldly or non-physical. -gts From gts_2000 at yahoo.com Sun Jan 10 15:44:48 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 07:44:48 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <417464.50273.qm@web36502.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stathis Papaioannou wrote: > I specified "use a word appropriately in every context"; > Google can't as yet do that. If and when Google does do that, it will have weak AI or AGI. One might argue that it already has primitive weak AI. Strong AI means something beyond that. It means something beyond merely "using words appropriately in every context". It means "actually knowing the meanings of the words used appropriately". It means "having a mind in the sense that humans have minds, complete with mental contents (semantics)". I can program my computer to answer "Yes" to the question "Do you have something in mind right now?" Will my computer then actually have something in mind when it executes that operation in response to my question? I might find it amusing to imagine so, but I also understand the difference between reality and the things I imagine. 
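The whole "program", in Python, might amount to nothing more than this (a deliberately silly sketch of my own; the question string is the only thing it "knows"):

    def respond(question):
        # a canned association between one string and another; nothing
        # here corresponds to actually having something in mind
        if question == "Do you have something in mind right now?":
            return "Yes"
        return "I do not understand the question."

    print(respond("Do you have something in mind right now?"))  # prints "Yes"

The output alone gives you no way to tell this apart from a sincere report of mental contents, and yet plainly there are no mental contents here.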
-gts From jonkc at bellsouth.net Sun Jan 10 16:17:12 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 11:17:12 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46C2BB.5000003@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> Message-ID: <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> On Jan 8, 2010, Damien Broderick wrote: > There's no shortage of weird ideas to explain the weird phenomena labeled "psi" There are indeed a lot of explanations of Psi, too many, few of them rational and none of them clear. I think the moral is that before you develop an elaborate theory to explain something, make sure that there is an actual phenomenon that needs explaining. After well over a century's effort not only have Psi "scientists" failed to explain how it works, they haven't even shown that it exists. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Jan 10 16:28:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 08:28:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <514861.16402.qm@web36507.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stathis Papaioannou wrote: > 2010/1/10 BillK : > >> I think what Gordon might be trying to say is that the >> brain is not a *digital* computer. Yes Bill, you understand me correctly. Stathis writes: > No-one claims that the brain is a digital computer, but it > can be simulated by a digital computer. If you think simulations of brains on digital computers will have everything real brains have, then you must think natural brains work like digital computers. But they don't. -gts From jonkc at bellsouth.net Sun Jan 10 16:32:16 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 11:32:16 -0500 Subject: [ExI] Psi (no need to read this post you already know what it says) In-Reply-To: <19052174.225601262904435226.JavaMail.defaultUser@defaultHost> References: <19052174.225601262904435226.JavaMail.defaultUser@defaultHost> Message-ID: On Jan 7, 2010, scerir wrote: > hey, there are amazing experiments here > http://www.parapsych.org/online_psi_experiments.html > http://www.fourmilab.ch/rpkp/experiments To tell the truth I don't find anything very amazing about them, lots of people know how to type. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefano.vaj at gmail.com Sun Jan 10 16:35:08 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 17:35:08 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <342642.53797.qm@web36505.mail.mud.yahoo.com> References: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <342642.53797.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21001100835s3f4529f7o1546c3d1ce1c209b@mail.gmail.com> 2010/1/10 Gordon Swobe : > In other words, I think subjective experience exists as part of the same physical world in which we find gum-ball machines, mountains and basketballs. That's crystal clear. I am only saying that the independent "physical existence" of something defined as subjective experience, and its hypothetical connection with organic brains, is for you a matter of faith, altogether outside of any kind of scientific proof or disproof. -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 10 16:42:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 17:42:41 +0100 Subject: [ExI] conscience In-Reply-To: <4B490633.8070900@satx.rr.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: <580930c21001100842g755ffb05laa684c179791d93a@mail.gmail.com> 2010/1/9 Damien Broderick : > This is a curious lexical error that non-English writers often make in > English, so I assume there must be only a single word in their languages for > the two very different concepts "conscience" ("the inner sense of what is > right or wrong in one's conduct or motives, impelling one toward right > action") and "consciousness" ( "the state of being conscious; awareness of > one's own existence, sensations, thoughts, surroundings, etc."). I dimly > recall that this is so in French. If so, how do you convey the difference in > Italian, etc? Yes, you are absolutely right. I should have known better, but the truth is that one tends to think in one's own mother tongue, and in Neolatin languages those are just two meanings of a single word. See, for an opposite example, "umanesimo" (which mostly refers to the Renaissance cultural movement, and more in general to the overcoming/refusal of theocentrism) and "umanismo" (which does not imply any secularism in Italian, and mostly refers to i) humanities as opposed to hard sciences, ii) anti-transhumanism, and iii) a kind of vague, politically correct "humanitarian" attitude). -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 10 16:47:53 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 17:47:53 +0100 Subject: [ExI] conscience In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: <580930c21001100847t3b529249kc181f18879fff228@mail.gmail.com> 2010/1/10 BillK : > I don't speak Italian, but it seems reasonable that Italians could say > 'coscienza morale' when they mean conscience. Yes. Or, more often, you pick the right meaning from the context (as in "esame di coscienza" before undergoing confession). But we have the distinction between conscious and conscientious (even though the second term refers to diligence and scrupulousness more than to morality).
-- Stefano Vaj From gts_2000 at yahoo.com Sun Jan 10 16:48:08 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 08:48:08 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001100835s3f4529f7o1546c3d1ce1c209b@mail.gmail.com> Message-ID: <205764.81556.qm@web36502.mail.mud.yahoo.com> --- On Sun, 1/10/10, Stefano Vaj wrote: >> In other words, I think subjective experience exists >> as part of the same physical world in which we find gum-ball >> machines, mountains and basketballs. > > That's crystal clear. I am only saying that the independent > "physical existence" of something defined as subjective experience, > and its hypothetical connection with organic brains, is for you a > matter of faith, altogether outside of any kind of scientific proof > or disproof. A matter of faith? Do you deny the existence of your own experience or its connection with your brain? I assert that 1) I have experience and 2) my experience goes away when someone whacks me in the head with a baseball bat. If you call that a statement of faith then I suppose I'm a devout believer. :) -gts From stefano.vaj at gmail.com Sun Jan 10 17:07:27 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 18:07:27 +0100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> 2010/1/9 Max More : > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) Neither have I, and I am quite impatient to get it on 3D Blu-ray. I am under the impression that it is quite popular amongst, e.g., my fellow Cosmic Engineers, but another quite sobering (albeit from a POV rather different from mine...) review can be found here: http://io9.com/5422666/when-will-white-people-stop-making-movies-like-avatar -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Jan 10 17:05:24 2010 From: spike66 at att.net (spike) Date: Sun, 10 Jan 2010 09:05:24 -0800 Subject: [ExI] trained by simulation: was RE: Avatar: misanthropy in three dimensions In-Reply-To: <20100110043307.96499.qmail@moulton.com> References: <20100110043307.96499.qmail@moulton.com> Message-ID: > ...On Behalf Of moulton at moulton.com > Subject: Re: [ExI] Avatar: misanthropy in three dimensions > > > On Sat, 2010-01-09 at 14:38 -0800, spike wrote: > > ...Fred you and I would have been in our mid 20s by then. > > Well, to be honest, in 1986 I was well beyond my mid 20s... The years have been kind to you. Whatever you are doing, keep it up. > ...teaching his son to drive about 5 years ago. It was > much easier, and the son learned more quickly than his sister > ... because he had played so many video games... Fred Same with my own experience with flight simulators. I had the rare opportunity to fly a Pitts Special (aerobatic stunt plane). From playing with flight simulators, I had a really good intuitive feel for what one can do in such a bird. For instance, the Pitts has a huge long nose sticking way out, and the wing chord is parallel with the fuselage, so to fly straight and level one must fly with the nose up.
http://en.wikipedia.org/wiki/Pitts_Special Since the cockpit is so far aft, when in straight and level flight, the pilot cannot see where she is going. So the easiest way to see in those things is to fly upside down. That would bother some pilots, but in the flight simulators, inverted flight is also the best way to look around, so it seemed natural to me the first time I took the controls. Flying upside down is actually more comfortable on the computer, I found. And if you push on the stick while inverted, the whole red-out negative G thing hurts. But it seems like we should be able to extend the trained-by-simulator concept beyond having drivers and pilots very skilled before they ever climb behind the wheel or into the cockpit. What else? Surgeons? spike From jonkc at bellsouth.net Sun Jan 10 17:13:10 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 12:13:10 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> On Jan 9, 2010, Gordon Swobe wrote: > Human operators ascribe meanings to the symbols their computers manipulate. Computers ascribe meaning to symbols too; if they didn't, they would treat all symbols the same. They don't. And if I had written the quote attributed to Searle where he talks about the meaning of meaningless symbols I would have been deeply embarrassed. > It's an understandable mistake; after all it sure *looks* like computers understand the meanings. And it sure *looks* like humans understand the meanings too, but who knows. > The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" If the computer did understand the meaning, you think the machine would continue to operate exactly as it did before, back when it didn't have the slightest understanding of anything. So, given that understanding is a completely useless property, why should computer scientists even bother figuring out ways to make a machine understand? Haven't they got anything better to do? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frankmac at ripco.com Sun Jan 10 17:41:45 2010 From: Frankmac at ripco.com (Frank McElligott) Date: Sun, 10 Jan 2010 12:41:45 -0500 Subject: [ExI] Late to the subject Message-ID: <001e01ca921c$33225cb0$ad753644@sx28047db9d36c> The Russian Gov't is preparing to nuke an asteroid that will be visiting the earth's orbit in 2016. The Russians quote odds of 33 to 1 that it will hit this planet. The United States bookmakers quote the odds at 2000 to 1. I will take 2000 to 1, but after the climate change cooking of the books, does the proverb "fool me once, shame on you; fool me twice, shame on me" take hold concerning this subject? Oh, by the bye, when they came up with 2000 to 1: if you add them together, "2001: A Space Odyssey" comes to mind, and thus these odds do not even pass the smell test by my nose. If so, should I begin to root for Moscow to succeed in its blast in space? I only bring it up because of the Bruce Willis movie from a few years ago, which had the same plot. Frank -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Sun Jan 10 17:58:32 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 11:58:32 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> Message-ID: <4B4A1548.6050407@satx.rr.com> On 1/10/2010 10:17 AM, John Clark wrote: > I think the moral is that before you develop an elaborate theory to > explain something, make sure that there is an actual phenomenon that needs > explaining. After well over a century's effort not only have Psi > "scientists" failed to explain how it works, they haven't even shown that > it exists. You *don't* know that, because you refuse to look at the published evidence##, because it's not in the Journal of Recondite Physics and Engineering. But I tend to agree that theorizing in advance of empirical evidence is pretty pointless--and yet the usual objection to psi from heavy duty scientists at the Journal of Recondite Physics and Engineering is, "We don't care about your anomalies, because you don't have a *theory* that predicts and explains them." My guess is that at some point a new comprehensive theory of spacetime and symmetry will emerge to account for quantum gravity, say, if the Higgs fails to appear, and one of its elements will be the surprise finding that certain psi functions fall out of the equations. Which is why it makes no sense to bet on when the topic will finally be deemed publishable in Nature or Science. You can bring in plenty of evidence of a small effect size, but without a theory to make everyone comfortable the evidence will be ignored. I *suspect* the same might be true of "cold fusion." There does seem to be quite a lot of evidence, but as yet no acceptable theory, so it's easier to assume it's a tale told by an idiot signifying nothing. But as I've said before, my dog isn't in that race so I don't know enough about the form at the track. Damien Broderick ##Here's a typical example of this sort of self-satisfied dismissal; I quote at some length from my book OUTSIDE THE GATES OF SCIENCE: From scerir at libero.it Sun Jan 10 18:09:25 2010 From: scerir at libero.it (scerir) Date: Sun, 10 Jan 2010 19:09:25 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <12761044.404421263146965979.JavaMail.defaultUser@defaultHost> > hey, there are amazing experiments here > http://www.parapsych.org/online_psi_experiments.html > http://www.fourmilab.ch/rpkp/experiments To tell the truth I don't find anything very amazing about them, lots of people know how to type. John K Clark # Well, my score was good with the clock and the pendulum. But maybe you are right. There are more concrete and amazing things, like the MWI, the anthropic principle, and the like.
From spike66 at att.net Sun Jan 10 18:12:47 2010 From: spike66 at att.net (spike) Date: Sun, 10 Jan 2010 10:12:47 -0800 Subject: [ExI] Late to the subject In-Reply-To: <001e01ca921c$33225cb0$ad753644@sx28047db9d36c> References: <001e01ca921c$33225cb0$ad753644@sx28047db9d36c> Message-ID: <36BA15ECAD7547C09FA9F7394A649F2D@spike> On Behalf Of Frank McElligott Subject: [ExI] Late to the subject >...The Russian Gov't is preparing to nuke an asteroid that will be visiting the earth's orbit in 2016. The Russians quote odds of 33 to 1 that it will hit this planet. The United States bookmakers quote the odds at 2000 to 1... Frank Frank, nuking an asteroid is such a difficult flight control problem that I will predict failure should they attempt it. There is a cruel irony attached to these kinds of missions: if they succeed and the asteroid breaks up, and any part of it then manages to re-enter, the commies will be liable. spike From jonkc at bellsouth.net Sun Jan 10 17:53:24 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 12:53:24 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <483875.29296.qm@web36507.mail.mud.yahoo.com> References: <483875.29296.qm@web36507.mail.mud.yahoo.com> Message-ID: On Jan 9, 2010, Gordon Swobe wrote: > I think consciousness will likely turn out to be just a state that natural brains can enter not unlike water can enter a state of solidity. In a way I sort of agree with that, but I don't see why a computer couldn't do the same thing. And not all water is solid, it's a function of temperature. In your analogy what is the equivalent of temperature? We have enormously powerful evidence that it must be intelligence. We know from direct experience that there is a one to one correspondence between consciousness and intelligence; when we are intelligent we are conscious and when we are not intelligent, as in when we are sleeping or under anesthesia, we are not conscious. > Some people seem to think that if we can compute X on a computer then a computer simulation of X must equal X. But that's just a blatant non sequitur. So if I add 2+2 on my computer and you add 2+2 on your computer it's a blatant non sequitur to think that my 4 is the same as your 4. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Sun Jan 10 18:29:40 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 10 Jan 2010 13:29:40 -0500 Subject: [ExI] Psi (no need to read this post you already know what it says) In-Reply-To: <12761044.404421263146965979.JavaMail.defaultUser@defaultHost> References: <12761044.404421263146965979.JavaMail.defaultUser@defaultHost> Message-ID: <4e3a29501001101029q58b942fcua0c7d84dd8d4625f@mail.gmail.com> Maybe the brain can somehow sort entangled atoms into coherent mental structures that respond to the nonlocal squirming of other, similar structures--and maybe, because of our place in time and space, we have a large number of these particles and have evolved a mental system to control them, and even a "language" of sorts, probably close or identical to our brain's neurological language, allowing us to communicate there. This is easy to imagine from an evolutionary standpoint. Humans whose brains could communicate in the tiniest amounts could react sooner to threats and bond more closely socially. The mechanism to entangle the entangled would grow in complexity, maybe even contributing to the growth of language.
This could even imply the idea that telepathic scenarios often involve people close to the receiver. Maybe it depends on the number of shared particles. Can entangled particles be spread by phone or computer? The brain is already well known for the seemingly implausible things it manages to obtain, especially in the sorting category--memories and senses and all that. If the brain can manipulate the information carried by minute electrical pulses, why not allow it to recognize and use entangled particles? Particles that behave differently can be categorized by that nature. The brain could do it. I think this is theoretically possible, much more so than other explanations, and that it could be an important missing piece in fields from linguistics to evolutionary biology to modern physics. Somebody who is learned, please lend us the answer as to whether this is actually possible in quantum physics or if I just have a case of "wikipedia PhD." -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jan 10 18:59:16 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 12:59:16 -0600 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain Message-ID: <4B4A2384.7030700@satx.rr.com> New Scientist: You won't find consciousness in the brain 7 January 2010 by Ray Tallis [Raymond Tallis wrote a wonderful deconstruction of deconstruction and poststructuralism, NOT SAUSSURE] MOST neuroscientists, philosophers of the mind and science journalists feel the time is near when we will be able to explain the mystery of human consciousness in terms of the activity of the brain. There is, however, a vocal minority of neurosceptics who contest this orthodoxy. Among them are those who focus on claims neuroscience makes about the preciseness of correlations between indirectly observed neural activity and different mental functions, states or experiences. This was well captured in a 2009 article in Perspectives on Psychological Science by Harold Pashler from the University of California, San Diego, and colleagues, which argued: "...these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained." Believers will counter that this is irrelevant: as our means of capturing and analysing neural activity become more powerful, so we will be able to make more precise correlations between the quantity, pattern and location of neural activity and aspects of consciousness. This may well happen, but my argument is not about technical, probably temporary, limitations. It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is. Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other.
Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of "aspects" depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness. This disposes of the famous claim by John Searle, Slusser Professor of Philosophy at the University of California, Berkeley: that neural activity and conscious experience stand in the same relationship as molecules of H[2]O to water, with its properties of wetness, coldness, shininess and so on. The analogy fails as the level at which water can be seen as molecules, on the one hand, and as wet, shiny, cold stuff on the other, are intended to correspond to different "levels" at which we are conscious of it. But the existence of levels of experience or of description presupposes consciousness. Water does not intrinsically have these levels. We cannot therefore conclude that when we see what seem to be neural correlates of consciousness that we are seeing consciousness itself. While neural activity of a certain kind is a necessary condition for every manifestation of consciousness, from the lightest sensation to the most exquisitely constructed sense of self, it is neither a sufficient condition of it, nor, still less, is it identical with it. If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could "reach out" to extracranial objects in order to be "of" or "about" them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are "about" the physical object. Biophysical science explains how the light gets in but not how the gaze looks out. Many features of ordinary consciousness also resist neurological explanation. Take the unity of consciousness. I can relate things I experience at a given time (the pressure of the seat on my bottom, the sound of traffic, my thoughts) to one another as elements of a single moment. Researchers have attempted to explain this unity, invoking quantum coherence (the cytoskeletal micro-tubules of Stuart Hameroff at the University of Arizona, and Roger Penrose at the University of Oxford), electromagnetic fields (Johnjoe McFadden, University of Surrey), or rhythmic discharges in the brain (the late Francis Crick). These fail because they assume that an objective unity or uniformity of nerve impulses would be subjectively available, which, of course, it won't be. Even less would this explain the unification of entities that are, at the same time, experienced as distinct. My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this "merging without mushing", this ability to see things as both whole and separate. And there is an insuperable problem with a sense of past and future. Take memory. It is typically seen as being "stored" as the effects of experience which leave enduring changes in, for example, the properties of synapses and consequently in circuitry in the nervous system. But when I "remember", I explicitly reach out of the present to something that is explicitly past. A synapse, being a physical structure, does not have anything other than its present state. 
It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system. This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a "stubbornly persistent illusion". There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness. I believe there is a fundamental, but not obvious, reason why that explanation will always remain incomplete - or unrealisable. This concerns the disjunction between the objects of science and the contents of consciousness. Science begins when we escape our subjective, first-person experiences into objective measurement, and reach towards a vantage point the philosopher Thomas Nagel called "the view from nowhere". You think the table over there is large, I may think it is small. We measure it and find that it is 0.66 metres square. We now characterise the table in a way that is less beholden to personal experience. Thus measurement takes us further from experience and the phenomena of subjective consciousness to a realm where things are described in abstract but quantitative terms. To do its work, physical science has to discard "secondary qualities", such as colour, warmth or cold, taste - in short, the basic contents of consciousness. For the physicist then, light is not in itself bright or colourful, it is a mixture of vibrations in an electromagnetic field of different frequencies. The material world, far from being the noisy, colourful, smelly place we live in, is colourless, silent, full of odourless molecules, atoms, particles, whose nature and behaviour is best described mathematically. In short, physical science is about the marginalisation, or even the disappearance, of phenomenal appearance/qualia, the redness of red wine or the smell of a smelly dog. Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and toward quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them. Material objects require consciousness in order to "appear". Then their "appearings" will depend on the viewpoint of the conscious observer. This must not be taken to imply that there are no constraints on the appearance of objects once they are objects of consciousness. Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions, the construction of self are just symptoms. 
We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object. Profile Ray Tallis trained as a doctor, ultimately becoming professor of geriatric medicine at the University of Manchester, UK, where he oversaw a major neuroscience project. He is a Fellow of the Academy of Medical Sciences and a writer on areas ranging from consciousness to medical ethics From scerir at libero.it Sun Jan 10 19:02:54 2010 From: scerir at libero.it (scerir) Date: Sun, 10 Jan 2010 20:02:54 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <20160872.394151263150174655.JavaMail.defaultUser@defaultHost> > Somebody who is learned, please lend us the answer as to if this is actually > possible in quantum physics or if I just have a case of "wikipedia PhD." > Will Steinberg I'm not learned enough but it seems to me that ... you need more than contemporary physics. Oh, wait, there is something here that might interest people on this list (for obvious reasons, you'll see). Compatibility of Contemporary Physical Theory with Personality Survival. http://www-physics.lbl.gov/~stapp/Compatibility.pdf Henry P. Stapp Theoretical Physics Group Lawrence Berkeley National Laboratory University of California Berkeley, California 94705 From thespike at satx.rr.com Sun Jan 10 19:19:59 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 13:19:59 -0600 Subject: [ExI] Psi (no need to read this post you already know what it says) In-Reply-To: <20160872.394151263150174655.JavaMail.defaultUser@defaultHost> References: <20160872.394151263150174655.JavaMail.defaultUser@defaultHost> Message-ID: <4B4A285F.9000504@satx.rr.com> On 1/10/2010 1:02 PM, scerir quoted: > Henry P. Stapp > Theoretical Physics Group > Lawrence Berkeley National Laboratory But, says John Clark, why should we pay any attention to this bozo, who is probably really a truck driver who failed high school and is just pretending to work for the Lawrence Berkeley National Lab, anyone can type. Which raises the key question (or a key question): What sort of evidence for psi phenomena would be publishable in Nature or Science, and how many replications by independent labs would be needed to make it acceptable? And to be acceptable, is it necessary that the scientists involved have no previous history of work in parapsychology? John failed to reply to my comment that once a reputable scientist or other academic reports apparent evidence for psi, he or she immediately falls into the "loony--safe to ignore" category. (Admittedly, some of the most distinguished scientists with an interest in psi do have a loony side, Nobelists included, and maybe they need to in order to get into that area of investigation to begin with. But we also know that Newton spent more time on astrology, alchemy and biblical codes than he did on physics and optics.) 
Damien Broderick From stefano.vaj at gmail.com Sun Jan 10 21:51:25 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 22:51:25 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <205764.81556.qm@web36502.mail.mud.yahoo.com> References: <580930c21001100835s3f4529f7o1546c3d1ce1c209b@mail.gmail.com> <205764.81556.qm@web36502.mail.mud.yahoo.com> Message-ID: <580930c21001101351q46b2790dx535efeae912e6019@mail.gmail.com> 2010/1/10 Gordon Swobe > A matter of faith? > Do you deny the existence of your own experience or its connection with > your brain? > The concept of "physical existence of subjective experience" sounds to me philosophically very naive, and I am still waiting for a definition thereof. As for its connection with organic brains, I do not see its projection on them as much more persuasive than that on plenty of other physical phenomena. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jan 10 22:07:12 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 16:07:12 -0600 Subject: [ExI] widely, although not universally, believed Message-ID: <4B4A4F90.9080208@satx.rr.com> http://en.wikipedia.org/wiki/Unruh_effect Although Unruh's prediction that an accelerating detector would see a thermal bath is not controversial, the interpretation of the transitions in the detector in the non-accelerating frame is. It is widely, although not universally, believed that each transition in the detector is accompanied by the emission of a particle, and that this particle will propagate to infinity and be seen as Unruh radiation. The existence of Unruh radiation is not universally accepted. Some claim that it has already been observed,[7] while others claim that it is not emitted at all.[8] While the skeptics accept that an accelerating object thermalises at the Unruh temperature, they do not believe that this leads to the emission of photons, arguing that the emission and absorption rates of the accelerating particle are balanced. [I was very disheartened by this, because I thought for a couple of minutes that I might have found an unexpected source for the cosmic background radiation, especially in a still-accelerating cosmos] From bbenzai at yahoo.com Sun Jan 10 23:53:57 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 10 Jan 2010 15:53:57 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <55510.41327.qm@web113604.mail.gq1.yahoo.com> Stathis Papaioannou wrote: > 2010/1/11 Ben Zaiboc : > > If you hooked Google up to a robotic (or virtual) > body, and gave it the means to sense the environment, and > move the body, and hooked up words to actions, then it would > be capable of understanding (assigning meaning to) the > words, because they would now have a context. > > It gets a bit tricky when you talk about a virtual body in > a virtual > environment. There may be a mapping between what happens in > the > computer when it follows an instruction to move two metres > to the left > and moving two metres to the left in the real world, but > there is no > basis for saying that this is what the symbols in the > computer "mean", > since there are also other possible mappings.
The meaning of 'two metres to the left' is tied up with signals that represent activating whatever movement system you use (legs, wheels, etc.), feedback from that system, confirmatory signals from sensory systems such as changes in visual signals (that picture on the wall is now nearer, for instance, as defined by such things as a change in its apparent size), adjustments in your environment maps, and so on, all falling into the appropriate category. Whether this information is produced by a 'real body' in the 'real world' or a virtual body in a virtual world makes absolutely no difference (after all, we may well be simulations in a simulated world ourselves. Some people think this is highly likely). I imagine it would lead to a pretty precise meaning for whatever internal signal, state or symbol is used for "two metres to the left". Once such a concept is established in the system in question, it can be available for use in different contexts, such as imagining someone else moving two metres to their left, recognising that an object is two metres to your left, etc. It seems to me that in a system of sufficient complexity, with appropriate senses and actuators, 'two metres to the left' is jam-packed with meaning. Ben Zaiboc From stathisp at gmail.com Mon Jan 11 00:43:02 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 11:43:02 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <53116.2093.qm@web36506.mail.mud.yahoo.com> References: <53116.2093.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/11 Gordon Swobe : > --- On Sat, 1/9/10, Stathis Papaioannou wrote: > >>> After leaving the hospital, the patient might tell you >> he believes in Santa Claus, but he won't actually "believe" >> in it; that is, he won't have a conscious subjective >> understanding of the meaning of "Santa Claus". >> >> He has no understanding of words before the operation, but >> he still has understanding! If he sees a dog he knows it's a dog, > > To think coherently about dogs or about anything else, one must understand words, and this poor fellow cannot understand his own spoken or unspoken words or the words of others. At all. > > He completely lacks understanding of words, Stathis. Suffering from complete receptive aphasia, he has no coherent thoughts whatsoever. > > We can suppose less serious aphasias if you like, but to keep our experiment pure I have assumed complete receptive aphasia. > > With b-neurons or possibly with m-neurons we can cure him. With p-neurons we can only program him to speak and behave in a way that objective observers will find acceptable, i.e., we can program him to pass the Turing test. The patient with complete receptive aphasia *does* have coherent, if non-verbal, thoughts. He can look at a situation, recognise what's going on, make plans for the future. That's thinking. But this is beside the point, as I'm sure you can see. It's easy to change the experiment so that any anatomically localised aspect of consciousness is taken out and replaced with zombie p-neurons. The original example was visual perception. Cram has all the neurons responsible for visual perception replaced, and as a consequence (you have to say) he will be completely blind. However, he will behave as if he has normal vision, because the rest of his brain is receiving normal input from the p-neurons. Searle thinks Cram will be blind, notice he is blind, but be unable to do anything about it.
This is only possible if Cram is able to think with something other than his brain, as you seem to realise, since you said that maybe Searle didn't really mean what he wrote or it was taken out of context to make him look bad. So there are only two remaining alternatives. One is that Cram is not blind but has perfectly normal vision, because you were wrong about the p-neurons lacking consciousness. The other is that Cram is blind but doesn't notice he is blind: honestly believes that nothing has changed and will tell you that you are crazy for saying he is blind when he can describe everything he sees as well as you can. Another example: we replace the neurons in Cram's pain centre with p-neurons, leaving the rest of the brain intact, then torture him. Cram screams and tells you to stop. You calmly inform him that he is deluded: since he has the p-neurons he isn't in pain, he only behaves and thinks he is in pain. So if you believe that it is possible to make p-neurons which behave just like b-neurons but lacking consciousness/understanding/intentionality then you are saying something very strange. You are saying that any conscious modality such as perception or understanding of language can be selectively removed from your brain (by swapping the relevant b-neurons for p-neurons) and not only will it not affect behaviour, you also will not be able to notice that it has been done. Initially you said that this thought experiment was so preposterous that you couldn't even think about it. Then you said that the p-neurons wouldn't actually behave like b-neurons because you don't believe consciousness is an epiphenomenon, which presents a difficulty because you think zombies are possible and the behaviour of the brain is computable. Later you seemed to be saying that the patient who gets the p-neurons will behave normally but won't notice that an aspect of his consciousness is gone because he will become a complete zombie and therefore won't notice anything at all. What is your latest take on what will happen? >> But you claim that Cram will actually have no understanding of >> "dog" despite all this. That is what seems absurd: what else could it >> possibly mean to understand a word if not to use the word appropriately >> and believe you know the meaning of the word? > > Although Cram uses the word "dog" appropriately after the operation, he won't believe he knows the meaning of the word, i.e., he will not understand the word "dog". If that seems absurd to you, remember that he did not understand it before the operation either. In this respect nothing has changed. He will hear the word "dog" and remember that he has to take his dog for a walk. If you ask him to draw a picture of a dog, a cat and a giraffe he will be able to do it. If you ask him to point to the tallest of the three animals he has drawn he will point to the giraffe. That sounds to me like understanding! What more could you possibly want of the poor fellow? -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 11 01:17:57 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 12:17:57 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <514861.16402.qm@web36507.mail.mud.yahoo.com> References: <514861.16402.qm@web36507.mail.mud.yahoo.com> Message-ID: 2010/1/11 Gordon Swobe : >> No-one claims that the brain is a digital computer, but it >> can be simulated by a digital computer. 
> > If you think simulations of brains on digital computers will have everything real brains have, then you must think natural brains work like digital computers. Gordon, it's sensible to doubt that a digital computer simulating a brain will have the consciousness that the brain has, since it isn't an atom for atom copy of the brain. What I have done is assume that it won't, and see where it leads. It leads to the conclusion that any aspect of your consciousness that is anatomically localised can be selectively removed without your behaviour changing and without you noticing (using the rest of your brain) that there has been any change. This seems absurd, since at the very least, you would expect to notice if you suddenly lost your vision or your ability to understand language. So I am forced to conclude that the initial premise, that the brain simulation was unconscious, was wrong. There are only two other premises in this argument which could be disputed: that brain activity is computable and that consciousness is the result of brain activity. -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 11 01:27:35 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 12:27:35 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> Message-ID: 2010/1/11 John Clark : > If the computer did understand the meaning, you think the machine would > continue to operate exactly as it did before, back when it didn't have the > slightest understanding of anything. So, given that understanding is a > completely useless property, why should computer scientists even bother > figuring out ways to make a machine understand? Haven't they got anything > better to do? Gordon has in mind a special sort of understanding which makes no objective difference and, although he would say it makes a subjective difference, it is not a subjective difference that a person could notice. -- Stathis Papaioannou From thespike at satx.rr.com Mon Jan 11 01:43:05 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 19:43:05 -0600 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> Message-ID: <4B4A8229.4060803@satx.rr.com> On 1/10/2010 7:27 PM, Stathis Papaioannou wrote: > Gordon has in mind a special sort of understanding which makes no > objective difference and, although he would say it makes a subjective > difference, it is not a subjective difference that a person could > notice. I have a sneaking suspicion that what is at stake is volitional initiative, conscious weighing of options, the experience of assessing and then acting. Yes, we know a lot of this experience is illusory, or at least misleading, because a large part of the process of "willing" is literally unconscious and precedes awareness, but still one might hope to have a machine that is aware of itself as a person, not just a tool that shuffles through canned responses--even if that can provide some simulation of a person in action. It might turn out that there's no difference, once such a complex machine is programmed right, but until then it seems to me fair to suppose that there could be. None of this concession will satisfy Gordon, I imagine.
Damien Broderick From stathisp at gmail.com Mon Jan 11 02:55:16 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 13:55:16 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <55510.41327.qm@web113604.mail.gq1.yahoo.com> References: <55510.41327.qm@web113604.mail.gq1.yahoo.com> Message-ID: 2010/1/11 Ben Zaiboc :
> The meaning of 'two metres to the left' is tied up with signals that represent activating whatever movement system you use (legs, wheels etc.), feedback from that system, confirmatory signals from sensory systems such as differences of visual signals (that picture on the wall is now nearer for instance (as defined by such things as a change in its apparent size)), adjustments in your environment maps, etc, etc., that all fall into the appropriate category.
>
> Whether this information is produced by a 'real body' in the 'real world' or a virtual body in a virtual world makes absolutely no difference (after all, we may well be simulations in a simulated world ourselves. Some people think this is highly likely). I imagine it would lead to a pretty precise meaning for whatever internal signal, state or symbol is used for "two metres to the left".
>
> Once such a concept is established in the system in question, it can be available for use in different contexts, such as imagining someone else moving two metres to their left, recognising that an object is two metres to your left, etc.
>
> It seems to me that in a system of sufficient complexity, with appropriate senses and actuators, 'two metres to the left' is jam-packed with meaning.

If we find an intelligent robot as sole survivor of a civilisation completely destroyed when their sun went nova, we can eventually work out what its internal symbols mean by interacting with it. If instead we find a computer that implements a virtual environment with conscious observers, but has no I/O devices, then it is impossible even in principle for us to work out what's going on. And this doesn't just apply to computers: the same would be true if we found a biological brain without sensors or effectors, but still dreaming away in its locked in state. The point is, there is no way to step outside of syntactical relationships between symbols and ascribe absolute meaning. It's syntax all the way down. -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 11 03:10:24 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 14:10:24 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B4A8229.4060803@satx.rr.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> <4B4A8229.4060803@satx.rr.com> Message-ID: 2010/1/11 Damien Broderick :
> I have a sneaking suspicion that what is at stake is volitional initiative,
> conscious weighing of options, the experience of assessing and then acting.
> Yes, we know a lot of this experience is illusory, or at least misleading,
> because a large part of the process of "willing" is literally unconscious
> and precedes awareness, but still one might hope to have a machine that is
> aware of itself as a person, not just a tool that shuffles through canned
> responses--even if that can provide some simulation of a person in action.
> It might turn out that there's no difference, once such a complex machine is
> programmed right, but until then it seems to me fair to suppose that there
> could be. None of this concession will satisfy Gordon, I imagine.
If you make a machine that behaves like a human then it's likely that the machine is at least differently conscious. However, if you make a machine that behaves like a human by replicating the functional structure of a human brain, then that machine would have the same consciousness as the human. If it didn't, it would lead to an absurd concept of consciousness as something that could be partly taken out of someone's mind without them either changing their behaviour or realising that anything unusual had happened. -- Stathis Papaioannou From lcorbin at rawbw.com Mon Jan 11 06:14:21 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jan 2010 22:14:21 -0800 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4A2384.7030700@satx.rr.com> References: <4B4A2384.7030700@satx.rr.com> Message-ID: <4B4AC1BD.2070607@rawbw.com> Damien Broderick wrote: > New Scientist: You won't find consciousness in the brain I don't understand what is being claimed here. An image developed on a photographic plate is similar in structure to some "extra-camera" physical object, and for me to have thoughts about an extra-cranial object seems similar. So how is this different? Also consider Well, a computer program, or even a pretty simple electromechanical device, can consult records! It seems likely to me that the writer is simply reiterating in some subtle way the desire on the part of many for a "first-person" account of consciousness. Which, I think, is impossible (for the simple reason that as soon as this account is recorded extra-cranially, it becomes objective and no longer first person). Lee From stathisp at gmail.com Mon Jan 11 06:22:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 17:22:19 +1100 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4AC1BD.2070607@rawbw.com> References: <4B4A2384.7030700@satx.rr.com> <4B4AC1BD.2070607@rawbw.com> Message-ID: 2010/1/11 Lee Corbin : > It seems likely to me that the writer is simply > reiterating in some subtle way the desire on the > part of many for a "first-person" account of > consciousness. Which, I think, is impossible > (for the simple reason that as soon as this > account is recorded extra-cranially, it becomes > objective and no longer first person). I think he's also alluding to the "Hard Problem" of consciousness. The Hard Problem refers to the fact that whatever facts are discovered about the processes underlying consciousness it is always possible to say, "but why should that produce consciousness?" It's not a very helpful question if no possible answer can ever satisfy. -- Stathis Papaioannou From lcorbin at rawbw.com Mon Jan 11 07:09:35 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jan 2010 23:09:35 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> Message-ID: <4B4ACEAF.2080102@rawbw.com> Stefano writes > ...but another quite sobering (albeit from a > POV rather different from mine...) review can be found here: > http://io9.com/5422666/when-will-white-people-stop-making-movies-like-avatar Of all the reviews linked to so far, this may be addressing the most fundamental, or perhaps most profound, issue. 
Here's a part that I want to speak about: When will whites stop making these movies and start thinking about race in a new way? First, we'll need to stop thinking that white people are the most "relatable" characters in stories. As one blogger put it: By the end of the film you're left wondering why the film needed the Jake Sully character at all. The film could have done just as well by focusing on an actual Na'vi native who comes into contact with crazy humans who have no respect for the environment. I can just see the explanation: "Well, we need someone (an avatar) for the audience to connect with. A normal guy will work better than these tall blue people." However, this is the type of thinking that molds all leads as white male characters (blank slates for the audience to project themselves upon) unless your name is Will Smith. But more than that, whites need to rethink their fantasies about race. Whites need to stop remaking the white guilt story, which is a sneaky way of turning every story about people of color into a story about being white. Well, the problem goes very deep. People *like* to see stories about white people, because the sad fact is that white people have more status. So it's just as in the old days, hearing stories about kings rather than about paupers (Shakespeare, for example, told relatively few stories about entirely ordinary people). In *whatever* culture, it seems---though there have to be a few exceptions---people want their children to be whiter. And don't forget the Japanese, who treasured the whiteness of their women; and when it turned out to be undeniable that white men actually had complexions whiter than their own women, they went into denial about it for some time. Now if that weren't bad enough, *film*, i.e. the nature of film, is also in on the conspiracy against people of color. White faces simply show up much better than dark faces in movies or portraits. Perhaps this is indeed evidence of (a malevolent) God's existence after all: how else the double whammy? Speaking as a white person, I don't need to hear more about my own racial experience. Well, this isn't just about you, buster. I'd like to watch some movies about people of color (ahem, aliens), from the perspective of that group, without injecting a random white (erm, human) character to explain everything to me. Okay, go get a producer to make a movie consisting entirely of black people. The only reference in the movie to white people could be that "they all died out long ago from their own corporate greed and unfriendliness to their environment". Then just see how well your movie does at the box office. (I also imagine that it has cost Disney a pretty penny to present non-white centered characters in movies, animation, and cable series.) Look, the solution to the problem is simple, and we need only wait patiently a few more years. Soon one will be able to control the amount of color that children will be born with, and not long after that people will themselves be able to undergo whitening processes a lot cheaper, more effective, and easier than Michael Jackson's. Then everybody can be as white as they please, and films will become truly equal opportunity. 
Lee From lcorbin at rawbw.com Mon Jan 11 07:13:53 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jan 2010 23:13:53 -0800 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: References: <4B4A2384.7030700@satx.rr.com> <4B4AC1BD.2070607@rawbw.com> Message-ID: <4B4ACFB1.4020406@rawbw.com> Stathis writes > Lee wrote > >> It seems likely to me that the writer is simply >> reiterating in some subtle way the desire on the >> part of many for a "first-person" account of >> consciousness. Which, I think, is impossible >> (for the simple reason that as soon as this >> account is recorded extra-cranially, it becomes >> objective and no longer first person). > > I think he's also alluding to the "Hard Problem" of consciousness. Ah, yes. > The Hard Problem refers to the fact that whatever facts are discovered > about the processes underlying consciousness it is always possible to > say, "but why should that produce consciousness?" Well, thanks for that. I had never heard that rebuttal! It seems true. Very interesting. > It's not a very helpful question if no possible answer > can ever satisfy. Yeah, if that isn't a sign of a "bad question", I don't know what is. Lee From sjatkins at mac.com Mon Jan 11 09:23:12 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 11 Jan 2010 01:23:12 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <2A83ABA6-AD1A-4883-AED1-60B70A95018C@mac.com> On Jan 9, 2010, at 10:54 AM, Gordon Swobe wrote: > --- On Sat, 1/9/10, Ben Zaiboc wrote: > >> In 'Are We Spiritual Machines?: Ray >> Kurzweil vs. the Critics of Strong AI', John Searle says: >> >> "Here is what happened inside Deep Blue. The computer has a >> bunch of meaningless symbols that the programmers use to >> represent the positions of the pieces on the board. It has a >> bunch of equally meaningless symbols that the programmers >> use to represent options for possible moves." >> >> >> This is a perfect example of why I can't take the guy >> seriously. He talks about 'meaningless' symbols, then >> goes on to describe what those symbols mean! He is >> *explicitly* stating that two sets of symbols represent >> positions on a chess board, and options for possible moves, >> respectively, while at the same time claiming that these >> symbols are meaningless. wtf? > > Human operators ascribe meanings to the symbols their computers manipulate. Sometimes humans forget this and pretend that the computers actually understand the meanings. We manipulate symbols ourselves that have no meaning except the one we assign. Worse, what we assign to most of our symbols is actually very murky, approximate and sloppy. Worse still the largest part of our mental processes are sub-symbolic, utterly unconscious output of a very lossy, buggy, biological computer programmed in large part just well enough to survive its environment and reproduce. > > It's an understandable mistake; after all it sure *looks* like computers understand the meanings. But then that's what programmers do for a living: we program dumb machines to make them look like they have understanding. > Human programmers are the primary reasons machines are not much smarter. The conscious explicit reasoning part of our brains is used for programming. It is notoriously weak, limited and only a small recently added experimental extension slapped on top of the original architecture. 
We can't explicitly program beyond the rather simplistic level we can debug. It is amazing our machines are as smart as they are with such constraints.

> The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?"

How can you prove that you understand the meanings?

> And Searle's answer is: "It won't happen from running formal syntactical programs on hardware as we do today, because computers and their programs cannot and will never get semantics from syntax."

But that is just semantics! Sorry, couldn't resist. :)

- samantha From sjatkins at mac.com Mon Jan 11 09:29:57 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 11 Jan 2010 01:29:57 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B48D71A.6010806@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> <4B48D71A.6010806@satx.rr.com> Message-ID: On Jan 9, 2010, at 11:20 AM, Damien Broderick wrote:
> On 1/9/2010 12:48 PM, John Clark wrote:
>> If a high school dropout who worked as the bathroom attendant at the zoo
>> had a website and claimed to have made a major discovery about stem
>> cells from an experiment described on that website I would not bother to
>> read it.
>
> Neither would I, probably. When a biochemistry PhD and Research Fellow of the Royal Society, like Sheldrake, does so, I'd be less quick to dismiss his scientific report.

Well, I have read a bit of Sheldrake. The man rolled up something powerfully mind altering in his diplomas and smoked it as far as I can tell. Morphogenic fields and 100th monkey syndrome indeed. If this passes for science then I don't know why we think science can get us out of the "demon haunted world". His reports are not expressed as science, are not verified by repeatable experiment, and do not fit well with existing knowledge or better explain most of what the existing knowledge has had good success explaining and making testable predictions about. I don't see that his credentials have a thing to do with it.

- samantha From sjatkins at mac.com Mon Jan 11 09:35:17 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 11 Jan 2010 01:35:17 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B4A8229.4060803@satx.rr.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> <4B4A8229.4060803@satx.rr.com> Message-ID: <0B9EF00A-FE8B-4B0A-9030-E116EAA09A12@mac.com> On Jan 10, 2010, at 5:43 PM, Damien Broderick wrote:
> On 1/10/2010 7:27 PM, Stathis Papaioannou wrote:
>
>> Gordon has in mind a special sort of understanding which makes no
>> objective difference and, although he would say it makes a subjective
>> difference, it is not a subjective difference that a person could
>> notice.
>
> I have a sneaking suspicion that what is at stake is volitional initiative, conscious weighing of options, the experience of assessing and then acting.
Yes, we know a lot of this experience is illusory, or at least misleading, because a large part of the process of "willing" is literally unconscious and precedes awareness, but still one might hope to have a machine that is aware of itself as a person, not just a tool that shuffles through canned responses--even if that can provide some simulation of a person in action. As I understand it a lot of the decision process is sub-/unconscious and the conscious mind often rationalizes the results. This does not mean that we are incapable of conscious logic and symbol manipulations, just that a lot of what we do isn't done that way.

> It might turn out that there's no difference, once such a complex machine is programmed right, but until then it seems to me fair to suppose that there could be. None of this concession will satisfy Gordon, I imagine.

No difference. If we were accidentally "programmed" to do whatever it is we do then there is no reason it could not be programmed on purpose, assuming our brains are just powerful enough. - samantha From stefano.vaj at gmail.com Mon Jan 11 11:52:48 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 11 Jan 2010 12:52:48 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <855333.87547.qm@web36505.mail.mud.yahoo.com> <4B48E373.9030607@satx.rr.com> Message-ID: <580930c21001110352v243cb942s45eba3f092f16c7e@mail.gmail.com> 2010/1/10 Stathis Papaioannou :
> No-one claims that the brain is a digital computer, but it can be
> simulated by a digital computer. The ideal analogue computer cannot be
> emulated by a digital computer because it can use actual real numbers.
> However, the real world appears to be quantised rather than
> continuous, so actual analogue computers do not use real numbers. And
> even if the world turned out to be continuous, factors such as thermal
> noise would make all the decimal places after the first few in any
> parameter irrelevant, so there would be no need to use infinite
> precision arithmetic to simulate an analogue device.

I think there is little doubt that organic brains do compute things - besides exhibiting other features, such as burning carbohydrates, which I take not to be of the essence in what we seek in a brain emulation. The fact that we can deal with ordinary arithmetic is a good enough example, I think. They do it rather poorly in some areas, and are much better in others. As to the digital/analog divide, I believe it has been shown well enough by Wolfram etc. that analog computers cannot do anything special that digital computers could not do. As to the quantum computing angle, organic brains are no better than digital computers in resolving classical problems which quantum computing should resolve, so supposing that they profit from quantum effects is tantamount to supposing that they profit from dark energy. What else remains to be said? Once the cerebral computations have been emulated, the issue of whether the emulation is "conscious" is not really different from wondering if it has a soul. The answer is social and cultural, not "factual". -- Stefano Vaj From gts_2000 at yahoo.com Mon Jan 11 12:42:41 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 11 Jan 2010 04:42:41 -0800 (PST) Subject: [ExI] Meaningless Symbols.
In-Reply-To: Message-ID: <186811.743.qm@web36504.mail.mud.yahoo.com> --- On Sun, 1/10/10, Stathis Papaioannou wrote:
> 2010/1/11 John Clark :
>
>> If the computer did understand the meaning you think
>> the machine would continue to operate exactly as it did before,
>> back when it didn't have the slightest understanding of anything.
>> So, given that understanding is a completely useless property why
>> should computer scientists even bother figuring out ways to make a
>> machine understand? Haven't they got anything
>> better to do?
>
> Gordon has in mind a special sort of understanding which
> makes no objective difference and, although he would say it makes a
> subjective difference, it is not a subjective difference that a person
> could notice.

Not so. The conscious intentionality I have in mind certainly does make a tremendously important subjective difference in every person. You and I have it but vegetables and unconscious philosophical zombies do not. It appears software/hardware systems also do not and cannot. John makes the point that one might argue that it does not matter from a practical point of view if software/hardware systems can or cannot have consciousness. I don't disagree, and I have no axe to grind on that subject. I don't pretend to defend strong AI research in software/hardware systems. My interest concerns the ramifications of the seeming hopelessness of strong AI research in s/h systems for us as humans. It tells us something important in the philosophy of mind. I wonder if everyone understands that if strong AI cannot work in digital computers then it follows that neither can "uploading" work as that term normally finds usage here. -gts From stefano.vaj at gmail.com Mon Jan 11 13:00:07 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 11 Jan 2010 14:00:07 +0100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <4B4ACEAF.2080102@rawbw.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> <4B4ACEAF.2080102@rawbw.com> Message-ID: <580930c21001110500j754b2a03j383fb7527da533bc@mail.gmail.com> 2010/1/11 Lee Corbin :
> Look, the solution to the problem is simple, and we need
> only wait patiently a few more years. Soon one will be
> able to control the amount of color that children will
> be born with, and not long after that people will
> themselves be able to undergo whitening processes a lot
> cheaper, more effective, and easier than Michael Jackson's.
> Then everybody can be as white as they please, and films
> will become truly equal opportunity.

... and we will be facing a dramatic loss of biodiversity. :-) It is true that "fairness" is often cross-culturally a sought-after beauty feature, but I think this may have to do, more than with some truly "universal" canon, with: - the status symbol arising from the indication that the individual concerned need not work in the fields (interestingly, UVA salons started operations when the rich became those who spent more time outdoors than those enslaved in cavernous offices...); - the fact that everybody is somewhat fairer in their youth; - the "prestige" derived from the historical success of Europoids, not to mention the presence of their genes in the ruling classes of many areas in the world. In fact, it is customary in transhumanist circles to discuss (critically) the misdeeds of State eugenism.
In truth, as pointed out by Habermas, the police need not really be around enforcing eugenic policies, since their intervention would, on the contrary, be required if the State ever decided to forbid parents to make use of technology to conform with social norms! This is why I think it is important, for those who are not keen on such an entropic process taking place at a global scale, to protect and foster cultural differences and a plurality of models and "optimality" views throughout different communities. Up to and beyond speciation, as far as I am concerned... ;-) -- Stefano Vaj From stathisp at gmail.com Mon Jan 11 13:04:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jan 2010 00:04:27 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <186811.743.qm@web36504.mail.mud.yahoo.com> References: <186811.743.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/11 Gordon Swobe :
> I wonder if everyone understands that if strong AI cannot work in digital computers then it follows that neither can "uploading" work as that term normally finds usage here.

This is true, and it's the main reason I've persevered with this thread. One day it may not be an abstract philosophical problem but a serious practical problem: you would want to be very sure before agreeing to upload that you're not killing yourself. For the reasons I've described, I'm satisfied that the philosophical problem is solved in favour of uploading and strong AI. Of course, there remains the far more difficult technical problem, and the possibility, however unlikely, that the brain is not computable. -- Stathis Papaioannou From stefano.vaj at gmail.com Mon Jan 11 13:13:38 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 11 Jan 2010 14:13:38 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B4A8229.4060803@satx.rr.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> <4B4A8229.4060803@satx.rr.com> Message-ID: <580930c21001110513v7b9beeb4k52c18c29ecb6848c@mail.gmail.com> 2010/1/11 Damien Broderick :
> Yes, we know a lot of this experience is illusory, or at least misleading,
> because a large part of the process of "willing" is literally unconscious
> and precedes awareness, but still one might hope to have a machine that is
> aware of itself as a person, not just a tool that shuffles through canned
> responses--even if that can provide some simulation of a person in action.

The real difference between "canned" and "ad-hoc" responses, I believe, is simply in the numbers thereof. Both are certainly finite, and I suspect it is purely a matter of accuracy if we find emulations based on a small set of the former to be rough and unsatisfactory... Even though very simple organic brains (and perhaps very stupid human beings) behave not that differently. Another, entirely different issue is whether such "brute force" emulation is really the most practical way to develop persuasively "conscious" (that is, conscious tout court) entities. I do not remember who calculated that a Chinese room would offer a couple of minutes of "consciousness" in the entire expected duration of the universe...
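A back-of-the-envelope version of that calculation, in a few lines of Python; the round numbers are my own guesses, not anybody's published figures:

brain_ops_per_sec = 1e16      # assumed synaptic events per second in a brain
clerk_ops_per_sec = 1.0       # the man in the room: one rule lookup per second
slowdown = brain_ops_per_sec / clerk_ops_per_sec

subjective_seconds = 120      # "a couple of minutes" of emulated consciousness
real_seconds = subjective_seconds * slowdown
print(real_seconds / 3.15e7)  # about 3.8e10 years, a few ages of the universe

So the order of magnitude of the story checks out, whoever first told it.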
-- Stefano Vaj From gts_2000 at yahoo.com Mon Jan 11 13:42:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 11 Jan 2010 05:42:02 -0800 (PST) Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4A2384.7030700@satx.rr.com> Message-ID: <410987.34614.qm@web36503.mail.mud.yahoo.com> --- On Sun, 1/10/10, Damien Broderick wrote:

This author seems to argue for metaphysical dualism; for the existence of a mental world distinct from the world of matter. As he writes:

> but my argument is not about
> technical, probably temporary, limitations. It is about the
> deep philosophical confusion embedded in the assumption that if
> you can correlate neural activity with consciousness, then you have
> demonstrated they are one and the same thing, and that a physical science
> such as neurophysiology is able to show what consciousness truly
> is.

In other words, he argues like Descartes that matter in the brain does not have consciousness, that it must come from somewhere else and exist in some way above, beyond or outside matter. I suppose he must think mental phenomena come from god rather than from their neurological substrate, though he never says so explicitly.

> This disposes of the famous claim by John Searle, Slusser
> Professor of Philosophy at the University of California, Berkeley:
> that neural activity and conscious experience stand in the same
> relationship as molecules of H[2]O to water, with its properties of
> wetness, coldness, shininess and so on. The analogy fails as the
> level at which water can be seen as molecules, on the one hand, and
> as wet, shiny, cold stuff on the other, are intended to correspond
> to different "levels" at which we are conscious of it. But
> the existence of levels of experience or of description
> presupposes consciousness. Water does not intrinsically have these
> levels.

Here he misrepresents or misunderstands Searle. In my reading of Searle, he uses the example of the solid state of pistons in an engine as analogous to the conscious state of the brain (I thought I invented the water analogy on my own, admittedly a poor one now that I think of it, but here Searle is said to use it too). In any case, the property of solidity does not, as this author tries to argue, "presuppose consciousness". Solid objects have the physical property of impenetrability no matter whether anyone knows of it. Likewise liquid states have the property of liquidity and gaseous states have the properties of gases independent of consciousness, and these are the sorts of analogies that Searle *actually* makes. This author either does not know this or else hopes the reader will not, and his entire argument depends on this false characterization. As I have it, consciousness has a first-person ontology and for the sake of saving the concept it ought not be *ontologically* reduced. However it may nonetheless be *causally* reduced to its neuronal substrate, something this author has a problem with. But in fact medical doctors do this all the time when discussing drugs and cures for illnesses that affect subjective experience. I have a tooth-ache this morning for example (I really do). I can take a pill for the conscious pain, and science can rightly concern itself with explanations as to why the pill works to kill the pain. Consciousness is in this way causally reducible to its neuronal substrate, even if it makes no sense to reduce it ontologically.
Much confusion arises as a result of not understanding the need for a distinction between ontological and causal reduction. When considering almost anything else in the world aside from conscious experience, we simultaneously do an ontological and a causal reduction. -gts From pharos at gmail.com Mon Jan 11 14:29:32 2010 From: pharos at gmail.com (BillK) Date: Mon, 11 Jan 2010 14:29:32 +0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <186811.743.qm@web36504.mail.mud.yahoo.com> Message-ID: On 1/11/10, Stathis Papaioannou wrote: > This is true, and it's the main reason I've persevered with this > thread. One day it may not be an abstract philosophical problem but a > serious practical problem: you would want to be very sure before > agreeing to upload that you're not killing yourself. For the reasons > I've described, I'm satisfied that the philosophical problem is solved > in favour of uploading and strong AI. Of course, there remains the far > more difficult technical problem, and the possibility, however > unlikely, that the brain is not computable. > > I don't see this as a problem unless you insist that the human body/brain *must* be destroyed during the upload/copy process. I would be very interested in having a copy of my massive intellect running in one of these new netbooks (circa 2020). I would be reorganising, tuning, rebuilding routines, patching, etc. like mad. (And you thought patching Windows was bad!). I would prefer that it didn't have any 'consciousness' features as I don't appreciate my computer whining and bitching about the work I'm doing on it. BillK From ddraig at gmail.com Mon Jan 11 07:21:52 2010 From: ddraig at gmail.com (ddraig) Date: Mon, 11 Jan 2010 18:21:52 +1100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> Message-ID: 2010/1/11 Stefano Vaj : > Neither have I, and I am quite impatient to get it on 3d blu-ray. You will watch it in 3d at home? How? Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From bbenzai at yahoo.com Mon Jan 11 16:18:00 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 11 Jan 2010 08:18:00 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <552968.20592.qm@web113618.mail.gq1.yahoo.com> Damien Broderick wrote: On 1/10/2010 7:27 PM, Stathis Papaioannou wrote: >> Gordon has in mind a special sort of understanding which makes no >> objective difference and, although he would say it makes a subjective >> difference, it is not a subjective difference that a person could >> notice. > > I have a sneaking suspicion that what is at stake is volitional > initiative, conscious weighing of options, the experience of assessing > and then acting. Yes, we know a lot of this experience is illusory, or > at least misleading, because a large part of the process of "willing" is > literally unconscious and precedes awareness, but still one might hope > to have a machine that is aware of itself as a person, not just a tool > that shuffles through canned responses--even if that can provide some > simulation of a person in action. 
It might turn out that there's no
> difference, once such a complex machine is programmed right, but until
> then it seems to me fair to suppose that there could be. None of this
> concession will satisfy Gordon, I imagine.

I have a very strong suspicion that a tool that shuffles through canned responses can never even approach the performance of a self-aware person, and only another self-aware person can do that. If you program your complex machine right, you will have to include whatever features give it self-awareness, consciousness, or whatever you want to call it. In other words, philosophical zombies are impossible, because the only way of emulating a self-aware entity is to in fact be one.

Ben Zaiboc From bbenzai at yahoo.com Mon Jan 11 16:21:30 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 11 Jan 2010 08:21:30 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <591989.12968.qm@web113609.mail.gq1.yahoo.com> Stathis Papaioannou wrote: 2010/1/11 Ben Zaiboc :
>> Whether this information is produced by a 'real body' in the 'real world' or a virtual body in a virtual world makes absolutely no difference (after all, we may well be simulations in a simulated world ourselves. Some people think this is highly likely). I imagine it would lead to a pretty precise meaning for whatever internal signal, state or symbol is used for "two metres to the left".
>
> If we find an intelligent robot as sole survivor of a civilisation
> completely destroyed when their sun went nova, we can eventually work
> out what its internal symbols mean by interacting with it. If instead
> we find a computer that implements a virtual environment with
> conscious observers, but has no I/O devices, then it is impossible
> even in principle for us to work out what's going on. And this doesn't
> just apply to computers: the same would be true if we found a
> biological brain without sensors or effectors, but still dreaming away
> in its locked in state. The point is, there is no way to step outside
> of syntactical relationships between symbols and ascribe absolute
> meaning. It's syntax all the way down.

No argument here about it being syntax all the way down, as long as you apply this to 'real-world' systems as well as simulations. In your example, you may be right about us not being able to understand what's going on, because we don't inhabit that level of reality (the alien sim), and have no knowledge of how it works. But so what? Just because we don't understand it, doesn't mean that the virtual mind doesn't have meaningful experiences. Presumably the sim would map well onto the original aliens' 'real reality', though, which might baffle us initially, but would be a solvable problem, meaning that the sim would also in principle be solvable (unless you think we can never decipher Linear A). In a human-created sim, of course, we decide what represents what. Having written the sim, we can understand it, and relate to the mind in there. In this case, there's no difference (in terms of meaning and experience) between a pizza being devoured on level 1 or on level 2, as long as the pizza belongs to the same reality level as the devourer, and one level is well-mapped to the other. (I am, of course, talking about proper simulations, with dynamic behaviour and enough richness and complexity to reproduce all the desired features at the necessary resolution, rather than a 'Swobian simulation', such as a photograph or a cartoon).
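A toy illustration of how much the mapping matters; the bytes and both 'readings' here are invented for the example, nothing more:

state = bytes([72, 105, 33])        # some system's internal state

# Reading 1: treat the bytes as ASCII text
print(state.decode("ascii"))        # -> Hi!

# Reading 2: treat the same bytes as three sensor values, scaled by ten
print([b / 10.0 for b in state])    # -> [7.2, 10.5, 3.3]

Both readings are perfectly consistent with the bytes themselves. Only interacting with the system, or having the well-mapped spec of its reality level, could tell you which mapping (if either) it actually uses.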
Ben Zaiboc From jonkc at bellsouth.net Mon Jan 11 16:53:55 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 11:53:55 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B4A1548.6050407@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> Message-ID: <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> On Jan 10, 2010, Damien Broderick wrote:

>> After well over a century's effort not only have Psi
>> "scientists" failed to explain how it works they haven't even shown that
>> it exists.
>
> You *don't* know that, because you refuse to look at the published evidence##

THAT IS NOT EVIDENCE THAT IS TYPING. There is no point in me looking at it; I already know what it's going to say: I set up the experiment this way, I had these controls, all the people were honest and competent, I got these amazing results, and I was really really really really careful. The trouble is I have no way of knowing if one word of that is true, I don't even have a way of knowing if there was an experiment done at all, for all I know it's just an exercise in typing.

> the usual objection to psi from heavy duty scientists at the Journal of Recondite Physics and Engineering is, "We don't care about your anomalies, because you don't have a *theory* that predicts and explains them."

Damien, I think even you know that is Bullshit. Nobody has a theory worth a damn explaining the acceleration of the entire Universe and nobody predicted it, yet the fact of its existence is accepted by those foolish conservative stick in the mud mainstream scientists because the evidence is just so good. Before it was observed nobody predicted the existence of Dark Matter even though it is by far the most abundant form of matter in the universe, and to this day nobody can explain it, nevertheless those silly mainstream scientists believe it exists because the evidence is just so good. Nobody predicted X Rays before Röntgen discovered them and neither he nor anybody else had a theory to explain them, but he became the most lionized physicist of his day and received the very first Nobel Prize in Physics. And even Darwin's Theory of Evolution, which has about as much emotion and prejudice aimed against it as it's possible for a scientific theory to have, was accepted by the mainstream scientific community in only about a decade, and when he died Darwin was given a hero's funeral and buried in Westminster Abbey right next to Newton.
You are asking us to believe that third string experimenters using equipment that cost almost nothing can detect a vital new fact about our universe, and have actually been doing exactly that for centuries; and yet in all that time not one world class experimenter can repeat the feat and allow the fact of Psi to become generally accepted, and you expect this grotesque situation will continue for at least another year, hence your refusal to take my bet. Damien, that just is not credible.

> I *suspect* the same might be true of "cold fusion."

I too suspect the same is true for cold fusion, more than suspect actually.

> what happened in 2001 when the Royal Mail in Britain published a special brochure to accompany their issue of special stamps to commemorate British Nobel Prize winners. Dr. Brian Josephson, Nobel physics laureate in 1973, took the opportunity to draw attention to anomalies research: "Quantum theory is now being fruitfully combined with theories of information and computation. These developments may lead to an explanation of processes still not understood within conventional science such as telepathy."

In the last 9 years microprocessors have become about 65 times as powerful, it was found that the expansion of the Universe is accelerating, and a probe was sent to Pluto; in the last 9 years what new advances have occurred in "these developments" that Josephson speaks of? Zero, nada, zilch, goose egg.

> Josephson responded in the Observer newspaper on October 7, 2001:
>
> The problem is that scientists critical of this research do not give their normal careful attention to the scientific literature on the paranormal: it is much easier instead to accept official views or views of biased skeptics . . . Obviously the critics are unaware that in a paper published in 1989 in a refereed physics journal, Fotini Pallikari and I demonstrated a way in which a particular version of quantum theory could get round the usual restrictions against the exploitation of the telepathy-like connections in a quantum system. Another physicist discovered the same principle independently; so far no one has pointed out any flaws.

I assume that Josephson is talking about the short paper "Biological Utilisation of Quantum NonLocality". Well... to find a flaw in something, the thing in question must have some substance. That paper makes no predictions, suggests no new experiments, and contains not a single equation; it's just a bunch of vague philosophical musings.

> An academic and science correspondent for the London Sunday Telegraph, Robert Matthews, commented sharply in November 1991:
> "there is now a wealth of evidence for the existence of ESP"

In the last 19 years microprocessors have become about 6500 times as powerful; how much has this "wealth of evidence" in support of ESP increased in that time? Zero, nada, zilch, goose egg. And who the hell is Robert Matthews?

> It turns out quantum theory is right, Einstein's wrong and that particles or systems that are in part of the same system, when apart, retain this nonlocal connection . . . If quantum theory is truly fundamental, then we may be seeing something analogous, even homologous, at the level of organisms. Insofar as people are thinking theories of telepathy, then this is one of the prime contenders.

That is incorrect. Yes non local connections exist and yes you can instantly change something on the other side of the universe, but you can't use that fact to send information and that's what you'd need to do to make telepathy work.
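If anyone wants to see the arithmetic, here is a toy simulation of the textbook case, an ideal spin-1/2 singlet pair measured at two stations; the code and numbers are only an illustration, not anybody's experiment:

import math, random

def singlet_sample(alpha, beta):
    # One run, Alice measuring along angle alpha and Bob along beta.
    # Quantum mechanics gives the joint distribution
    # P(a, b) = (1 - a*b*cos(alpha - beta)) / 4 for outcomes a, b = +1 or -1.
    a = random.choice([+1, -1])                      # Alice: 50/50 whatever she does
    p_b_plus = (1 - a * math.cos(alpha - beta)) / 2  # Bob's odds, given Alice's result
    b = +1 if random.random() < p_b_plus else -1
    return a, b

def bob_plus_fraction(alpha, n=200000):
    # Bob keeps his axis fixed at 0 while Alice varies hers.
    return sum(singlet_sample(alpha, 0.0)[1] == +1 for _ in range(n)) / n

for alpha in (0.0, math.pi / 4, math.pi / 2):
    print(alpha, bob_plus_fraction(alpha))

Every line prints about 0.5; nothing Alice chooses to do shows up in Bob's results by themselves. The perfect anti-correlation at alpha = 0 only appears when the two lists of outcomes are brought together and compared, and bringing them together takes an ordinary slower-than-light channel, which is exactly the step telepathy would have to skip.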
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 11 16:41:45 2010 From: spike66 at att.net (spike) Date: Mon, 11 Jan 2010 08:41:45 -0800 Subject: [ExI] i have been anticipating this development for years Message-ID: <6161400420EB4CA0BB6F51C2EBA1ACB7@spike> Not that I would buy one for myself, but in principle you understand: http://www.foxnews.com/scitech/2010/01/11/worlds-life-size-robot-girlfriend/ ?test=faces spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Mon Jan 11 17:41:52 2010 From: max at maxmore.com (Max More) Date: Mon, 11 Jan 2010 11:41:52 -0600 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto Message-ID: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> Presumably Emlyn and some others here will strongly disagree with Lanier's new book -- at least based on the interview included on the Amazon page... http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647/ref=pe_37960_14063560_as_txt_1/ From that interview, his views are worth pondering, but he does seem to be excessively anti-Web 2.0/collective wisdom. Max From lcorbin at rawbw.com Mon Jan 11 18:30:49 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 11 Jan 2010 10:30:49 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> Message-ID: <4B4B6E59.7020205@rawbw.com> Stathis Papaioannou wrote:
> 2010/1/11 John Clark :
>
>> If the computer did understand the meaning you think the machine would
>> continue to operate exactly as it did before, back when it didn't have the
>> slightest understanding of anything. So, given that understanding is a
>> completely useless property why should computer scientists even bother
>> figuring out ways to make a machine understand? Haven't they got anything
>> better to do?
>
> Gordon has in mind a special sort of understanding which makes no
> objective difference and, although he would say it makes a subjective
> difference, it is not a subjective difference that a person could
> notice.

Well, no, those who wish to make the case for the possible existence of zombies are making a far stronger claim than that; they're claiming that there wouldn't even be a "noticer" at all. I.e., rather than any subjective difference, they posit simply there being no subject. To me, the claim is not at all incoherent, merely extremely unlikely. And a variation on what John Clark says above is this old argument: if "true understanding" doesn't make any difference, then why did evolution bother to manufacture it?
Lee From steinberg.will at gmail.com Mon Jan 11 18:30:45 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 13:30:45 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> Message-ID: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> A possible method for deducing the entanglement of particles: A "neuron trap" contains an ion, with spin x and a magnetic dipole d. If the spin reverses, the dipole will reverse and cause polarization of the neuron, which could be connected in parallel to another neuron, which would be connected to a larger analysis system; if the neuron pair shows polarization from within, the ion can be either integrated into molecules for transport to the entanglement zone or chelated and moved with some endocrine stuff. I don't know whether these entanglements could be actually understood or coordinated, but I think it would be true that, perhaps through atmospheric transmission, entangled gas atoms accrue between people in close physical contact and are all stored in the entanglement bank; when the system realizes many of its atoms are changing simultaneously based on the other person's (speculation speculation speculation,) it understands it. Maybe the system is wired to think, say, in binary, and constantly "broadcasts" a sequential, morse code-like binary throughout all entangled atoms and systems, but only receivable if one has the necessary amount of entanglement, furthered by contact. Would explain familial TP and all that stuff, and sometimes random TP because of chaotic coincidences. That makes sense and could be explained to have arisen through early humans or even animals; a pattern of X would make the tribe run away. Maybe language itself is an outpouring of this mental language where the simplicity of said language could not commonly express complicated concepts. The brain can do things that are much, much more complicated, like PRODUCE QUALIA AND CONSCIOUSNESS. So...this seems possible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Jan 11 18:31:34 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 13:31:34 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> Message-ID: <4e3a29501001111031j1402e5e8h6ec4e2cb00f724cb@mail.gmail.com> Also to be noted--this is experiment-friendly. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sparge at gmail.com Mon Jan 11 18:42:49 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 11 Jan 2010 13:42:49 -0500 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto In-Reply-To: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> References: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> Message-ID: On Mon, Jan 11, 2010 at 12:41 PM, Max More wrote:
> Presumably Emlyn and some others here will strongly disagree with Lanier's
> new book -- at least based on the interview included on the Amazon page...
>
> http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647/ref=pe_37960_14063560_as_txt_1/
>
> From that interview, his views are worth pondering, but he does seem to be
> excessively anti-Web 2.0/collective wisdom.

From the interview: "Collectivists adore a computer operating system called LINUX, for instance, but it is really only one example of a descendant of a 1970s technology called UNIX. If it weren't produced by a collective, there would be nothing remarkable about it at all."

Nobody is arguing that Linux's design is innovative, so, yes, what's remarkable about it is that it was produced by a collective. That's like saying there'd be nothing remarkable about Mozart if he wasn't a composer.

"Meanwhile, the truly remarkable designs that couldn't have existed 30 years ago, like the iPhone, all come out of "closed" shops where individuals create something and polish it before it is released to the public. Collectivists confuse ideology with achievement."

Now that's just bullshit. The iPhone is a *good* design, but it's not remarkable. It's the almost inevitable result of a long string of technological developments. Apple's "closed shop" enabled it to beat the "collectivist" competition to market, but not by much.

-Dave From thespike at satx.rr.com Mon Jan 11 19:31:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 13:31:17 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> Message-ID: <4B4B7C85.5030600@satx.rr.com> On 1/11/2010 12:30 PM, Will Steinberg wrote:
> Maybe language itself is an outpouring of this mental language where the
> simplicity of said language could not commonly express complicated
> concepts.

Possibly something like that. The experience from Star Gate and elsewhere suggests strongly that psi does not handle detailed alphanumerics well, and often flips images right to left; the main feature is a powerful correlation with entropy gradients in the target. This is why remote viewing and Ganzfeld protocols are more effective in eliciting informative coincidences than the classic boring card-guessing experiments. In a sense, psi is *all* semantics--more paradigmatics and less syntagmatics (in the terminology of semiotics).## Damien Broderick ## e.g.
From thespike at satx.rr.com Mon Jan 11 19:48:28 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 13:48:28 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> <4B48D71A.6010806@satx.rr.com> Message-ID: <4B4B808C.8050204@satx.rr.com> On 1/11/2010 3:29 AM, Samantha Atkins wrote: > Well, I have read bit of Sheldrake. The man rolled up something powerfully mind altering in his diplomas and smoked it as far as I can tell. Morphogenic fields and 100th monkey syndrome indeed. If this passes for science then I don't know why we think science can get us of the "demon haunted world".< I agree. Hence my comparison with Newton's alchemy etc. But do try to decouple the zany attempts to explain from the careful (or sloppy, if that's demonstrably the case) experiments. > His reports are not expressed as science,< What do you object to in the phone experiments I cited? They seem very clear, clean, objective. Unless he and all the participants were just making it up. > are not verified by repeatable experiment,< Of course they are, and have been. > and do not fit well with existing knowledge or better explain most of what the existing knowledge has had good success explaining and making testable predictions about.< Yes, that's the real problem, in my view. But now you've slipped back to invoking his half-baked hypotheses rather than the empirical evidence of his experiments (and those of others replicating them, or which he's replicating). Many skeptics are delighted to hear that Randi has totally debunked the claims that some dogs have a significantly higher than chance likelihood of knowing when their humans are coming home, even when those arrivals are scheduled randomly. Randi declared that his own experiments had shown this was BULLSHIT, and that Sheldrake's were bogus. Well, he did until it was demonstrated (and he finally admitted) that he actually *hadn't* ever done such trials himself, and he was wrong about Sheldrake's. I don't have a strong opinion about the claim one way or the other, but I'm always amused and astonished by the way professional doubters like Randi just *make stuff up* and get away with it. It's precisely what John Clark asserts about the psi claimants--such people can talk and type, and that's it. The locus classicus was the sTARBABY debacle (it's easy to look it up), where again the substantive issue is irrelevant; what's salient is the way CSICOPS scrambled to deny and hide their own confirmatory findings. Strong opinions on both sides often lead to wild bogosity; it's not just the Sheldrakes who need to be tested for probity. 
Damien Broderick From jonkc at bellsouth.net Mon Jan 11 20:56:31 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 15:56:31 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> Message-ID: <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> On Jan 11, 2010, Will Steinberg wrote: > Maybe the system is wired to think, say, in binary, and constantly "broadcasts" a sequential, morse code-like binary throughout all entangled atoms and systems That won't work because quantum entanglement can't transmit information, it can change things at a distance but you need more than that to send a message, you also need a standard to measure that change against, and that is where entanglement falls short. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 11 21:07:03 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 15:07:03 -0600 Subject: [ExI] quantum entanglement In-Reply-To: <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> Message-ID: <4B4B92F7.2090903@satx.rr.com> On 1/11/2010 2:56 PM, John Clark wrote: > quantum entanglement can't transmit information, it can change things at > a distance but you need more than that to send a message, you also need > a standard to measure that change against, and that is where > entanglement falls short. Isn't the message problem that you can't *force* a predictable change upon part of an entangled system? If A's particle spin is up, then B's is down, okay, you know that--but A can't *make* her particle go spin up when she wants it to without breaking the entanglement. No? 
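A toy simulation makes the point concrete. In the Python sketch below, the cos^2 rule is the standard singlet statistics for a spin-1/2 pair, the conditional sampling is only a caricature good for two fixed settings rather than a serious quantum model, and the angles and trial counts are arbitrary. What it shows: however Alice fiddles with her measurement angle, Bob's own column of results remains a featureless 50/50 coin, and the anticorrelation only appears when the two lists are laid side by side.

import math
import random

def singlet_pair(alpha, beta, rng):
    # Alice's outcome is a fair coin; Bob's outcome is anticorrelated with
    # hers with probability cos^2((alpha - beta)/2), the singlet rule.
    a = rng.choice((+1, -1))
    b = -a if rng.random() < math.cos((alpha - beta) / 2) ** 2 else a
    return a, b

rng = random.Random(2010)
for alpha in (0.0, 1.0, math.pi):  # Alice tries to "signal" by changing her angle
    bob = [singlet_pair(alpha, 0.0, rng)[1] for _ in range(200_000)]
    print(f"alpha = {alpha:.2f}: Bob's +1 frequency = {bob.count(+1) / len(bob):.3f}")

# Bob's frequency stays ~0.500 whatever Alice does; the only thing she can
# change is a correlation he cannot see without her list in hand.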
Damien Broderick From steinberg.will at gmail.com Mon Jan 11 21:13:56 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 16:13:56 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> Message-ID: <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> 2010/1/11 John Clark > On Jan 11, 2010, Will Steinberg wrote: > > That won't work because quantum entanglement can't transmit information, it > can change things at a distance but you need more than that to send a > message, you also need a standard to measure that change against, and that > is where entanglement falls short. > The observer effect can still effect (I had to) information, just not information with any pushing power--but this is not needed for communication. If, because fatalism hold true, the entangled qubits produce an informational response within the brain, that same response can be predicted given a standard-ish "mental language." I may not be able to cause a particle to spin, but I think many (Especially you, John) would agree that "I" can not cause my hands to move or a sentence to form, given causality. We are already energic outbursts of little bits and bots trying to wiggle their way back to entropy, formed only by the motion of energy through us like an electron transport chain. The brain makes decisions and THEN we interpret them; this is fact; this has been tested using fMRI. Extension to another observational medium can't be too hard. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Jan 11 21:19:54 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 16:19:54 -0500 Subject: [ExI] quantum entanglement In-Reply-To: <4B4B92F7.2090903@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com> Message-ID: <4e3a29501001111319j12811ea4v857051e55f98ba61@mail.gmail.com> On Mon, Jan 11, 2010 at 4:07 PM, Damien Broderick wrote: > On 1/11/2010 2:56 PM, John Clark wrote: > > quantum entanglement can't transmit information, it can change things at >> a distance but you need more than that to send a message, you also need >> a standard to measure that change against, and that is where >> entanglement falls short. >> > > Isn't the message problem that you can't *force* a predictable change upon > part of an entangled system? If A's particle spin is up, then B's is down, > okay, you know that--but A can't *make* her particle go spin up when she > wants it to without breaking the entanglement. No? 
> > Right, which is why Psi has to be based on observation of a root signal causing identical changes or predictions in one or more people, maybe leading to drastic conclusions on truly "random" mental occurrences governing our thoughts. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 11 21:49:29 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 16:49:29 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> Message-ID: <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> On Jan 11, 2010, Will Steinberg wrote: > The observer effect can still effect (I had to) information, just not information with any pushing power--but this is not needed for communication. I don't understand what that means. > I think many (Especially you, John) would agree that "I" can not cause my hands to move or a sentence to form Actually I don't agree. As I said before I think that saying that I scratched my nose because I wanted to is a perfectly correct way to describe the situation, as is saying the balloon expanded because the pressure inside it increased, it's just that those are not the only way to describe what is going on. "I" is a high level description of what a hundred billion neurons are doing, and "pressure" is a high level description of what millions of billions of trillions of atoms are doing. > given causality That is not a given, modern Physics says causality is bunk. And after all, why should all events have a cause? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 11 21:26:12 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 16:26:12 -0500 Subject: [ExI] quantum entanglement In-Reply-To: <4B4B92F7.2090903@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com> Message-ID: <8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> On Jan 11, 2010, Damien Broderick wrote: > Isn't the message problem that you can't *force* a predictable change upon part of an entangled system? Yes. Suppose that you and I had a pair of dice that were entangled in a Quantum Mechanical way, you are in Australia and I am in the USA. We both roll our dice and we write down the numbers, after hundreds of rolls we both examine the numbers and we both conclude that they agree with the laws of probability and nothing unusual has occurred.
However if I get on a jet and fly to Australia and show you my list of numbers we find that my list is identical with your list. That is weird, but changing one apparently "random" event to another apparently "random" event is no way to send a message. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 11 21:58:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 15:58:14 -0600 Subject: [ExI] quantum entanglement In-Reply-To: <8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com> <8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> Message-ID: <4B4B9EF6.7090506@satx.rr.com> On 1/11/2010 3:26 PM, John Clark wrote: > However if I get on a jet and fly to Australia and show you my list of > numbers we find that my list is identical with your list. That is weird That *is* weird, because I'm in San Antonio and have been for years. :) From steinberg.will at gmail.com Mon Jan 11 22:10:19 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 17:10:19 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> Message-ID: <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> I don't even care about whether things have a cause, just that everything that happens causes something. For example (this one is probably false but illustrates the idea:) The entanglement reading system is built. It so happens that a bunch of entangled atoms spin a certain way in many different brains. Two people, Isaac Newton and Gottfried Leibniz, have brains that are wired in similar manners because of their upbringings and work. Patterns produced by random spins, reshuffled at breakneck speeds, happens to, for a nanosecond, chance upon something that is roughly equal to "understand calculus!" in brainguage. The random spins can, because of their quantity, chance upon certain concepts that are picked up by multiple people. A bit spins in my brain and also spins in yours that tells us what lists to write; I don't send mine to you but they have the same root cause. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 11 22:14:47 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 17:14:47 -0500 Subject: [ExI] Meaningless Symbols. 
In-Reply-To: References: <186811.743.qm@web36504.mail.mud.yahoo.com> Message-ID: <32D6B15A-F581-4DE2-A889-B1538B627A8B@bellsouth.net> On Jan 11, 2010 Stathis Papaioannou wrote: > One day it may not be an abstract philosophical problem but a > serious practical problem Truer words have never been spoken. I think my chance of surviving the meat grinder called "The Singularity" is very low, almost zero. But the chance of someone surviving The Singularity who has not overcome the soul superstition is exactly zero, regardless of what euphemism they prefer for the word soul. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Mon Jan 11 22:18:27 2010 From: scerir at libero.it (scerir) Date: Mon, 11 Jan 2010 23:18:27 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <14435781.528241263248307850.JavaMail.defaultUser@defaultHost> Will Steinberg: >A possible method for deducing the entanglement of particles: [...] To my knowledge the possible (but questionable) role of entanglement in physiology (i.e. human eye) has been discussed in very very few papers written by *good* physicists. The interest in entanglement depends on recent experiments with two macroscopic states (localized in two space-like separated sites) which become non-locally correlated having interacted - in the past - with an entangled couple of single-particles (micro-macro entanglement). Quantum experiments with human eyes as detectors based on cloning via stimulated emission http://arxiv.org/abs/0902.2896 -Pavel Sekatski, Nicolas Brunner, Cyril Branciard, Nicolas Gisin, Christoph Simon Abstract: We show theoretically that the multi-photon states obtained by cloning single-photon qubits via stimulated emission can be distinguished with the naked human eye with high efficiency and fidelity. Focusing on the "micro-macro" situation realized in a recent experiment [F. De Martini, F. Sciarrino, and C. Vitelli, Phys. Rev. Lett. 100, 253601 (2008)], where one photon from an original entangled pair is detected directly, whereas the other one is greatly amplified, we show that performing a Bell experiment with human-eye detectors for the amplified photon appears realistic, even when losses are taken into account. The great robustness of these results under photon loss leads to an apparent paradox, which we resolve by noting that the Bell violation proves the existence of entanglement before the amplification process. However, we also prove that there is genuine micro-macro entanglement even for high loss. Towards Quantum Experiments with Human Eyes Detectrors Based on Cloning via Stimulated Emission? http://arxiv.org/abs/0912.3110 -Francesco De Martini Abstract: We believe that a recent, unconventional theoretical work published in Physical Review Letters 103, 113601 (2009) by Sekatsky, Brunner, Branciard, Gisin, Simon, albeit appealing at fist sight, is highly questionable. Furthermore, the criticism raised by these Authors against a real experiment on Micro - Macro entanglement recently published in Physical Review Letters (100, 253601, 2008) is found misleading and to miss its target. 
Quantum superpositions and definite perceptions:envisaging new feasible experimental tests http://arxiv.org/abs/quant-ph/9810028 -GianCarlo Ghirardi Abstract: We call attention on the fact that recent unprecedented technological achievements, in particular in the field of quantum optics, seem to open the way to new experimental tests which might be relevant both for the foundational problems of quantum mechanics as well as for investigating the perceptual processes. From jonkc at bellsouth.net Mon Jan 11 22:23:59 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 17:23:59 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> Message-ID: <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> On Jan 11, 2010, Will Steinberg wrote: > I don't even care about whether things have a cause, just that everything that happens causes something. On that I think you are on much firmer ground. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Jan 11 22:32:01 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 17:32:01 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> Message-ID: <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> > I don't even care about whether things have a cause, just that everything > that happens causes something. > > > On that I think you are on much firmer ground. > > John K Clark > > And from this ground I stand and look out on undiscovered pastures. When I chance to have a "synchronous" moment with another human being, something happens that *feels* like knowing, and discounting intuition is discounting subconscious understanding. I used to run TP tests with some friends using at first six words and then nine; results were incredible until we started rolling dice. But they were still interesting, just not to the point where I could statistically verify, especially given trials. But there is something that happens--when I knew I was going to know, it felt stronger, like a voice shouting "Orange!" in the back of my head. 
It would be wise to record, in experiments, before the answer is given, whether the person feels like it is right or just guessing; it is my prediction that results would show interesting correlations. If anyone is interesting in conducting well-thought out TP studies (perhaps not perfectly experimental but enough to get a glimpse of the interesting,) I do pretty much nothing all the time and would be happy to further the bounds of knowledge, whether this means proof or disproof. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 11 22:55:50 2010 From: spike66 at att.net (spike) Date: Mon, 11 Jan 2010 14:55:50 -0800 Subject: [ExI] quantum entanglement In-Reply-To: <4B4B9EF6.7090506@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com><8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> <4B4B9EF6.7090506@satx.rr.com> Message-ID: <7A7C562B81054DB19547A7874805DB7F@spike> > ...On Behalf Of Damien Broderick ... > > However if I get on a jet and fly to Australia... > > ...I'm in San Antonio and have been for years. :) You can take the mate out of Australia, but... From pharos at gmail.com Mon Jan 11 23:59:34 2010 From: pharos at gmail.com (BillK) Date: Mon, 11 Jan 2010 23:59:34 +0000 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> Message-ID: On 1/11/10, Will Steinberg wrote: > > When I chance to have a "synchronous" moment with another human being, > something happens that feels like knowing, and discounting intuition is > discounting subconscious understanding. I used to run TP tests with some > friends using at first six words and then nine; results were incredible > until we started rolling dice. You need the dice. People are very bad at trying to choose digits or colours so as to make a random list. That's why when people see a string of heads, heads, heads, heads, heads - they almost automatically say 'the next one *must* be tails'. Two people trying to guess at random will produce far more matches than you would expect by chance alone purely because the guessing mechanism in their brains is very similar and is not random. < But they were still interesting, just not to > the point where I could statistically verify, especially given trials. But > there is something that happens--when I knew I was going to know, it felt > stronger, like a voice shouting "Orange!" in the back of my head. 
That's called self-delusion. Gambling addicts suffer from it a lot. BillK From steinberg.will at gmail.com Tue Jan 12 00:20:40 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 19:20:40 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> Message-ID: <4e3a29501001111620o71d17804r8652bf568a2f9ddc@mail.gmail.com> On Mon, Jan 11, 2010 at 6:59 PM, BillK wrote: > Two people trying to guess at random will produce far more matches > than you would expect by chance alone purely because the guessing > mechanism in their brains is very similar and is not random. > > Which is why there is merit in the fact that random entanglement patterns can produce similar mental patterns that seem like communication, but are actually just prediction, and are what we call psi or synchronicity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Jan 12 01:24:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jan 2010 12:24:42 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <591989.12968.qm@web113609.mail.gq1.yahoo.com> References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> Message-ID: 2010/1/12 Ben Zaiboc : > Presumably the sim would map well onto the original aliens' 'real reality', though, which might baffle us initially, but would be a solvable problem, meaning that the sim would also in principle be solvable (unless you think we can never decipher Linear A). > > In a human-created sim, of course, we decide what represents what. Having written the sim, we can understand it, and relate to the mind in there. No, we can never even in principle decipher Linear A unless we have some clue extraneous to the actual texts. We could map any meaning to it we want. It need not even be consistent: there is no reason why the creators of Linear A, hoping to befuddle future archaeologists, could not have made the same symbol mean different things in different parts of the text. A text or code has no life of its own. It's of trivial interest that multiple meanings could be attached to it: the important thing is to work out the originally intended meaning. With computations, however, the situation may be different. If it is an inputless program the meaning ascribed to it by the original programmer has no magical potency to affect it; any other meaning that could possibly be mapped to it, including a meaning that changes from moment to moment, is just as good. This means that any physical activity which could, under some mapping, be seen as implementing a computation is implementing that computation. Like interpreting a text according to an arbitrary encoding this is a trivial observation in general, but it becomes interesting when we consider computations that create their own observers in virtual environments.
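A short sketch makes the triviality explicit (Python; the two plaintexts are arbitrary stand-ins): with a one-time-pad style mapping, one and the same string of symbols can be made to "mean" any message of the same length, each reading perfectly self-consistent, which is why the texts alone can never settle the question.

def xor(data: bytes, key: bytes) -> bytes:
    # One-time-pad style decode: XOR each byte with the matching key byte
    return bytes(d ^ k for d, k in zip(data, key))

ciphertext = bytes(range(14))  # any fourteen bytes whatsoever will do
for reading in (b"ATTACK AT DAWN", b"HOLD YOUR FIRE"):
    key = xor(ciphertext, reading)  # pick the codebook that yields this reading
    assert xor(ciphertext, key) == reading
    print(key.hex(), "->", reading.decode())

Nothing in the ciphertext itself favours one key over the other; only information from outside the text -- the programmer's intention, the archaeologist's bilingual inscription -- can do that.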
-- Stathis Papaioannou From steinberg.will at gmail.com Tue Jan 12 01:50:52 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 20:50:52 -0500 Subject: [ExI] Psychoactives Message-ID: <4e3a29501001111750t322331ffoabc2dda69413fb98@mail.gmail.com> How does everyone here feel about the use of psychoactives to achieve novel thinking patterns? Even discounting medical applications (DXM for Downs, Cannabis for cancer, mushrooms for migraines, LSD for OCD) psychoactives play an important role in human thought, especially when it comes to understanding systemic or logical concepts--they are ideal tools for finding solutions, and factored into the double-helix discovery and the invention of the PCR, as well as numerous computer science dealies. Marijuana tips the scales of the mind towards creativity and allows for new thought processes; tryptamines and phenethylamines and some weird guys like Salvinorin A blow it out of the water completely and introduce the mind to pure possibility. Given that many of you were teenagers right around when this stuff was HUGE, I would think that some of your jaded adolescent minds latched onto Leary and found that, while the assumed prophets of drugs might be spinning their wheels about DMT-world talking basketballs, psychoactives themselves could be a useful tool in ideation, for the simple fact that they DO make new mental patterns and connections and that, given the brain's logical system, new patterns and equations mean new possibilities of understanding--visualizing 4D objects is only as difficult as the brain producing visual equations compatible with a four-coordinate system (or however it does the stuff it does.) Do you smart folk have your own psychonautic (though the term has been adopted by some of the sillier people) agendas? Or are there still limits in this slice of intelligentsian delight? (I hope not) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Jan 12 02:17:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 11 Jan 2010 18:17:35 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <436311.9101.qm@web36505.mail.mud.yahoo.com> --- On Mon, 1/11/10, Stathis Papaioannou wrote: >> I wonder if everyone understands that if strong AI >> cannot work in digital computers then it follows that >> neither can "uploading" work as that term normally finds >> usage here. > > This is true, and it's the main reason I've persevered with this thread. I think you deserve a medal for your sincere and honest efforts in pursuing these questions with me. Thank you. I wonder if digital computers will ever appreciate these kinds of things that I appreciate about you. -gts From emlynoregan at gmail.com Tue Jan 12 04:18:20 2010 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jan 2010 14:48:20 +1030 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto In-Reply-To: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> References: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> Message-ID: <710b78fc1001112018x18bb39aoa679539ff87a5db0@mail.gmail.com> 2010/1/12 Max More : > Presumably Emlyn and some others here will strongly disagree with Lanier's > new book -- at least based on the interview included on the Amazon page...
> > http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647/ref=pe_37960_14063560_as_txt_1/ > > From that interview, his views are worth pondering, but he does seem to be > excessively anti-Web 2.0/collective wisdom. > > Max Wow, what a depressing interview. It's very "Hey You Kids Get Off My Lawn!" I saw this post, and read the link, before my morning coffee, and actually started running around the house shouting, it made me so angry. I've tried to calm down before posting :-) > "Question: You argue the web isn't living up to its initial promise. How has the internet transformed our lives for the worse? > Jaron Lanier: The problem is not inherent in the Internet or the Web. Deterioration only began around the turn of the century with the rise of so-called > "Web 2.0" designs. These designs valued the information content of the web over individuals. It became fashionable to aggregate the expressions of > people into dehumanized data. There are so many things wrong with this that it takes a whole book to summarize them. Here's just one problem: It > screws the middle class. Only the aggregator (like Google, for instance) gets rich, while the actual producers of content get poor. This is why newspapers > are dying." Aggregator is bullshit business speak, a reframing to make people providing a service look bad. What he means is search engines. Why is all the traffic going through them? Because it's a better way to find stuff. The real problem here is that the grouping we call a newspaper is irrelevant; it's a centuries-old invention about distributing information on paper (there's a clue in the name, news*paper*). I recently read a marketing person talking about what's happening as the debranding of content; no matter how hard the marketers try, people (who they like calling consumers) just won't go to brands first and find things from there, they insist on going to the faceless search engines and searching everything all at once. It turns out people just don't care about these brands. Newspapers are dying because the heavily optimised business model is broken and unrepairable. The search engines actually send traffic to the newspapers *for free*. They don't have to do that. Here's some more on this: http://publishing2.com/2007/05/29/should-google-subsidize-journalism/ Actually "aggregator" is a good term for newspapers. They take a lot of unrelated things and aggregate them. Their business model relies on this, and to the extent that this is undermined they are screwed. What users are doing is ignoring the aggregation (a whole newspaper) and cherry picking the good stuff; they are being disaggregated. Incidentally, this is what is also hurting the music industry (you can only sell songs now, not albums), and is why the movie business is doing well (you can't disaggregate a movie). As to the middle class, well, if you are working in an obsolete job, get a clue and do something else. If there is a systemic inability for you to be able to do any middle class work at all, maybe the system needs changing? But I talk about this more below. > It might sound like it is only a problem for creative people, like musicians or writers, but eventually it will be a problem for everyone. When robots > can repair roads someday, will people have jobs programming those robots, or will the human programmers be so aggregated that they essentially > work for free, like today's recording musicians? Web 2.0 is a formula to kill the middle class and undo centuries of social progress."
(more running and shouting) This isn't even wrong, it's so bad. Yes, there's a problem coming up, it's the SF dream of years ago, rushing up to smash us in the face; the dream of automating away all labour. We've already automated agricultural work away, at least in the developed world, and manufacturing is going that way (largely done or offshored in the developed world, and I read bits and pieces here and there that chinese manufacturing is beginning to automate heavily rather than add people, for instance). But I suspect Lanier doesn't mourn those peasant and working class jobs going by the wayside. The internet is a *fundamental breakthrough*. Web 2.0 is just a bit more unfolding of that. I think we will probably dedicate most of the 21st century to this continuing unfolding, and it'll be upheaval all the way. Upheaval doesn't mean, hey, stuff will be interesting and fun and you'll be able to buy cooler iPhones, it means that stuff we take for granted as fixed will change. Now I think he's conflated two issues in the paragraph above. First, that creative people's jobs are threatened. Second, that all work will disappear. That creative people's jobs are threatened now is true to an extent. There are two major types of work threatened here, one apparently threatened but probably actually strengthened, and one actually threatened. The first one is high profile people/orgs who live by creating some "content" and selling copies ad infinitum. There is a lot of wringing of hands over this, but in fact there's no evidence that anyone is suffering. It is true that people's business models may need to change, but it's not all that hard in fact. The clever people are noticing that if you separate out the scarce from the non-scarce stuff (copies are non-scarce, while scarce are personalised things, timely things such as events, automated server based services, etc), then you can use the non-scarce stuff to get reputation, and reputation can be used to sell the scarce stuff. So famous musicians now make money from touring and use the recording to sell the touring, rather than the other way around, for instance. In the end, this is a small group of people, running businesses which are not exempt from environmental shifts, but who have places they can shift into and be just fine. The second, actually threatened group, is people doing massively duplicated and substitutable creative work. These are usually again from a 20th century business model, and were born of the tyranny of geography, requiring the same work to be replicated over and over in different locales. Examples are newspaper photographers, some types of graphic design, some types of media monitors?, many of the non-journalist creative jobs in newspapers (eg: advice columns, horoscopes, laying out the classified ads). Also, it's not always geography, but commercial pricing and the inability or unwillingness of most commercial entities to work together or make interoperable stuff, that gives us other groups; people who write dictionaries or encyclopedias, many types of packaged software development, and in-house development which is in packaged software space. These people's jobs are being destroyed in the 21st century. Free/cheap stock photo collections continue to hammer the mundane body of photography, graphic design is downloadable so there is less market for the low end, media is endlessly aggregated online. All the pieces of newspapers that aren't journalism are better done online, and don't need to be redone over and over. 
Wikipedia eats the encyclopedias, dictionaries are replaced by online equivalents, packaged software is eaten by open source. (Next on the chopping block: Universities, whose cash cow, the undergrad degree, will be replaced with cheap/free alternative, and scientific journals, which are much better suited by free online resources if the users can just escape the reputation network effect of the existing closed journals) The only real threatened jobs are where people are doing low value crap. Padding. High value stuff will remain. For example, to the extent that journalists are actually useful (and this is highly arguable; journalists are generalists, good at making it look like they know more than they do, in an age where we can see the primary sources and hear direct from the experts), they will be preserved, if they provide a service that can't be substituted. Eg: local reporting might be pulled together under umbrella organisations who monetize that somehow, but more likely it'll be crowdsourced from the actual local people. But you can't pad any longer. You can't make an album of 2 hits and 13 pieces of crap; no one will buy the crap. People will read your great articles, but you can't sell a whole newspaper which is mostly pointless crap and ads (a great analysis by Clay Shirky here: http://www.shirky.com/weblog/2009/10/rescuing-the-reporters/). You can sell blockbuster movies, but the poor quality filler ones will bomb like never before, people know it sucks even before release. You can sell a great novel, but the market for 20 volume tolkeinesque extruded product will eventually fail (books are a bit behind the curve due to the slow take up of eBooks, but that's happening now). You can sell fantastic innovative software like photoshop, or sibelius, but you can't endlessly resell your office applications which have become commoditised, and you can no longer make money making cruddy little CD burning apps which should be freely available utilities. Back to the paragraph above, he then mentions robots taking away all the work. Well, that's been a dream for a long time. And the problem is not that people will miss the work, it is how will we live, ie: get money, without jobs? Good question, a really big question that needs answering. But paid work isn't good in itself; it's by definition stuff you do because you are being paid, and probably wouldn't do otherwise, a necessary evil. It's really hard to support holding on to the concept of paid work if it stops being needed for production. Finally though, it's a huge leap from uninspired duplicated and substitutable paid creative work being in trouble, to all jobs will disappear. Between here and there is such a long, windy, obscured path that the one can't shed light on the other. > Question: You say that we?ve devalued intellectual achievement. How? > Jaron Lanier: On one level, the Internet has become anti-intellectual because Web 2.0 collectivism has killed the individual voice. Rubbish. The Web is full of individual voices. Web 2.0 collectivism (eg: wikipedia) is small compared to the number of articles, blog posts, comments, which have an author. It's just that it's a big world, with a *lot* of individual voices, so getting a mass audience is tougher. Also, many of the individual voices are new ones who are being heard. 
I wish I could find Dr Ben Goldacre's article on this, where he talks about the hopelessness of science journalism, and the way you can now get your information directly from the researchers, because they're blogging about it, for free. A good example is the blog of Fields medalist Terrance Tao: http://terrytao.wordpress.com/ > It is increasingly disheartening to write about any topic in depth these days, because people will only read what the first link from a search engine > directs them to, and that will typically be the collective expression of the Wikipedia. Why do they only read that? Because mostly we just need a good, impartial summary and Wikipedia does that wonderfully. But nothing is stopping anyone from writing. It's easier to publish than ever before. So that can't be the objection. The objection above is actually that it is hard to be read. I propose that it is actually no more difficult to be read now than it ever was. It just looks that way to people who are used to having someone publish their work, because they had broken through what was previously the most difficult boundary; the publishing industry gatekeepers. Now that this is no longer as relevant, of course there is more to read, so more competition to be heard amongst all that. But that's no more difficult for writers and potential writers overall, just better for readers. > Or, if the issue is contentious, people will congregate into partisan online bubbles in which their views are reinforced. I don?t think a collective > voice can be effective for many topics, such as history--and neither can a partisan mob. Collectives have a power to distort history in a way > that damages minority viewpoints and calcifies the art of interpretation. Only the quirkiness of considered individual expression can cut through > the nonsense of mob--and that is the reason intellectual activity is important. I think we were always partisan, just like we were always stupid. It's just that now, we can see *everyone*. So we can see the other partisan groups, and we can see the stupid stuff, in a way that was largely hidden before. I also think things are actually improving. For example, we've always had conspiracy theories, but now we can see these things, and laugh and prod at them. Think about how scientology has declined as the laser light of the networked people has been beamed into it. > On another level, when someone does try to be expressive in a collective, Web 2.0 context, she must prioritize standing out from the crowd. To do > anything else is to be invisible. Therefore, people become artificially caustic, flattering, or otherwise manipulative. Compete for attention. That's the essence of the modern world. You used to be able to buy your way to success (eg: big content industries could vertically lock up the market, or at least work together cartel-wise to keep out others). Now you increasingly cannot. So, you need to work at being more interesting, in order to garner interest. But then that paragraph doesn't make sense. To be "expressive" you must prioritise "standing out from the crowd"? Non sequitur. To be expressive, you express. That's really unrelated to audience. To get attention, you might need to concentrate on standing out from the crowd, but that's unrelated to being expressive. The real lament here is about the difficulty of getting attention. Yeah, it's tough, suck it up. > Web 2.0 adherents might respond to these objections by claiming that I have confused individual expression with intellectual achievement. 
> This is where we find our greatest point of disagreement. I am amazed by the power of the collective to enthrall people to the point of > blindness. Collectivists adore a computer operating system called LINUX, for instance, but it is really only one example of a descendant of > a 1970s technology called UNIX. If it weren't produced by a collective, there would be nothing remarkable about it at all. Fuck me sideways, this old chestnut. Linux is far too big an enterprise to easily generalize about, but it was never about being an intellectual achievement, or being remarkable. It was a response to the frustration that such banal infrastructure as operating systems required paying rent to commercial interests, and were closed, so people couldn't modify it, fix it, see the internal workings and better optimise their work to it, etc etc etc. This was such a painful problem that technical people rebuilt the whole thing from the ground up to be free for everyone forever. That people did this is testament to how crappy the situation was before. Linux massively supports individual freedom. You can go get a copy, and do whatever you want with it, without reference to anyone. Don't like DRM? You don't have to have it. Don't like govt agency backdoors? In principle you can be sure they are not there (and I think so in practice, but I'm assuming enough other eyeballs, and could be wrong). Don't want to be held over a barrel by corporate interests? You don't have to. Want to adapt it to run on your quirky piece of hardware? You can. Want to do your own bizarre shit? You can. Contrast that with any of the closed OSs. Open source does provide plenty that's innovative, but mostly it's not been about that, it's been about freedom, the pure icy cold frosty chocolate libertarian kind. > Meanwhile, the truly remarkable designs that couldn't have existed 30 years ago, like the iPhone, all come out of "closed" shops where individuals create > something and polish it before it is released to the public. Collectivists confuse ideology with achievement. iPhones are lovely, but one of the worst examples of closed shop thinking around. If open source is the bazaar, then the iPhone is a shiny shopping mall. Stuff like that is anti-human; it comes out of a place where there are no people, just consumers. I do think they've been a nice demonstration of what you can do with a bunch of new technologies (accelerometers, cheap 3G, cheap touch screens, cheap good quality cameras), but think of what they can't do. For example, what justification can there possibly be for not having phones transparently use WiFi for phone calls when in range, and 3G only when there is no other option? The hardware can do it, people want it. It's BS. People often (often!) say to me that Google will be the next Microsoft, and turn into bastards. Maybe. I'm far more worried about Apple, which has always been a closed shop, hostile to 3rd parties and into fleecing the consumer; Microsoft has always been a better choice in terms of freedom. > Question: Why has the idea that "the content wants to be free" (and the unrelenting embrace of the concept) been such a setback? What dangers > do you see this leading to? > Jaron Lanier: The original turn of phrase was "Information wants to be free." And the problem with that is that it anthropomorphizes information. > Information doesn't deserve to be free. It is an abstract tool; a useful fantasy, a nothing. It is nonexistent until and unless a person experiences it > in a useful way.
What we have done in the last decade is give information more rights than are given to people. If you express yourself on the > internet, what you say will be copied, mashed up, anonymized, analyzed, and turned into bricks in someone else?s fortress to support an > advertising scheme. However, the information, the abstraction, that represents you is protected within that fortress and is absolutely sacrosanct, > the new holy of holies. You never see it and are not allowed to touch it. This is exactly the wrong set of values. (more running and shouting) Information enriches us. It is pushing strongly in the direction of free not because some ideologues want it to, but because a billion people, empowered by a digital information network joining them all together, want it to be so. Wikipedia exists not because some ideologues want it, but because it's stupendously useful to a billion people, and some of them work to keep it happening. The collection of all of the world's music exists on Youtube not because some ideologues want it (in fact, there don't seem to be any focussed groups who do, and some that definitely don't), but because it's an incredible wealth that a billion people really want. I'm really hammering the keys now, pissed off. This massive collection of searchable, quality information on the net (including the brilliant ways the web 2.0 technologies have found to rise the cream to the top), represents an astounding increase in the absolute wealth of humanity, absolutely flabbergasting. Who has not had their daily lives changed by being able to look up almost any fact at a moment's notice, hear any music, get advice on any topic? Think of the way we used to hoard computer and programming manuals, encyclopedias, dusty books on arcane subjects, paltry collections of LPs and CDs. If we define wealth as access to and power over stuff, is there any across the board increase in absolute wealth that compares to what we've had in the last 10 years, in all of human history? What you say will be copied: this is a platform for perfectly copying digital information. That it will be mashed up: good god, it's a genetic algorithm at work. Information is being worked on by it. People do the mutating and recombining (and sometimes inject new pieces supposedly from whole cloth). The fitness function is how well it competes for attention amongst people (largely, on how good/crap it is). If mashups are crappy, they'll disappear into the morass of crap on the internet. If they are better, they then have better claim to attention than the original work. Your ego is your problem. I've seen a lot of people making similar complaints to Lanier in the last year or so. Bono's recent suggestion that we use draconian Chinese-style monitoring and control online to prop up his CD revenues comes to mind. They all seem to share some suspicious similarities: they are relatively wealthy, and at least part of their income is passive income from IP. I think for wealthy people, this increase in absolute wealth isn't a big deal, because it's absolute; relative to what they already had, it's a rounding error. If Bono wanted access to all the world's music, he could just get a little man to go buy it all for him. Wealthy people can maintain massive private libraries. Many of the areas where we've had improvement are around things that didn't cost much, but randomly accessing all of it (all the scientific journals, or all the books, or all the CDs) was cost prohibitive, unless you were wealthy. 
And I guess of course, if you make a lot of your money from opening royalty checks, well, you don't want that to stop. That income from IP (which requires people respecting your personal brand) is a lot more important to you than this absolute increase for everyone, which you can barely notice. > The idea that information is alive in its own right is a metaphysical claim made by people who hope to become immortal by being uploaded > into a computer someday. It is part of what should be understood as a new religion. That might sound like an extreme claim, but go visit > any computer science lab and you?ll find books about "the Singularity," which is the supposed future event when the blessed uploading is > to take place. A weird cult in the world of technology has done damage to culture at large. Well there you go. Jaron Lanier hates us in particular. "a new religion". Wanker. Show me this damage. Where are we poorer in information terms today than we ever were. Where is our culture(s) weaker, more degenerate? > Question: In You Are Not a Gadget, you argue that idea that the collective is smarter than the individual is wrong. Why is this? > Jaron Lanier: There are some cases where a group of people can do a better job of solving certain kinds of problems than individuals. One > example is setting a price in a marketplace. Another example is an election process to choose a politician. All such examples involve > what can be called optimization, where the concerns of many individuals are reconciled. There are other cases that involve creativity and > imagination. A crowd process generally fails in these cases. The phrase "Design by Committee" is treated as derogatory for good reason. > That is why a collective of programmers can copy UNIX but cannot invent the iPhone. A collective of volunteer programmers will not invent the iPhone because it is an essentially corporate beast. It is shiny and lowest common denominator. It emphasises features that can make money (Make it rich in the App Store!), and hides those that wont (eg: voip over wifi). It is non-hackable in any useful way. It is not only not user serviceable, but barely serviceable even by Apple. You can't strip the apple software out and install your own thing on it. It's a symbol of the corporate desire for the perfect consumer, who knows nothing, slavers over the shiny thing, swallows the mass marketed message. A wallet with legs. Volunteer programmers primarily create things that they themselves want and will use. Linux is derided as hard to use on the desktop; that's because it's made by people who don't want to cater to the lower common denominator. The whole attitude that you shouldn't have to know anything, you can just buy stuff, is a product of 20th century mass market consumerism, it's anti human and wrong. The point of the internet is that you can know anything, and can certainly inform yourself quickly enough to get along in most anything as needed. "Consumer" is a euphemism for "Mark", it's not something to aspire to, it's failure. If we have evolved to be anything noble, it is to be creative, constructive creatures. That's why a corporation can make an operating system but can never invent Linux. > In the book, I go into considerably more detail about the differences between the two types of problem solving. Creativity requires periodic, > temporary "encapsulation" as opposed to the kind of constant global openness suggested by the slogan "information wants to be free." 
> Biological cells have walls, academics employ temporary secrecy before they publish, and real authors with real voices might want to polish > a text before releasing it. In all these cases, encapsulation is what allows for the possibility of testing and feedback that enables a quest > for excellence. To be constantly diffused in a global mush is to embrace mundanity. No one is stopping you from doing this. In fact anyone doing anything thoughtful does this; create in private, polish, then release. The internet is largely individual voices, individual works, and that will not go away any time soon. That there are collective approaches being deployed doesn't attack the individual voice. Remember that every one of those collective approaches comes down to an individual or small group who had an idea, "what if we created an environment where people collaborate in such and such a way"? It's seems to me such a confused idea, that you can't privately create in the web 2.0 world. Of course you can. Any guide to getting an open source project off the ground will tell you first to create something that works, then try showing that to people and see if they're interested in helping. That you can control what you publish - well, that's a whole different thing. Short answer is "no" of course. The real thing that is difficult for creative people is the global competition. It's not creating in isolation that's hard, it's the same as it ever was. It's getting anyone to care once you want to bring your creation into the light. You can do your best work, and find not only can you not sell it, you can't give it away for free, because people have so much choice, and of such quality, that you have to work really hard to get people to even glance at your thing. But that's just old fashioned competition, ratcheted up to the scale of a billion+ potential competitors. If you care about doing quality work, you can still do that. To the extent that it's hard to be paid to do it, exactly when was the mythical glorious past where it was easy? If you care about getting your ego stroked, getting personal attention, well yeah, get in line, and be ready for that to be really hard. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From emlynoregan at gmail.com Tue Jan 12 05:07:41 2010 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jan 2010 15:37:41 +1030 Subject: [ExI] Ecosia - is this BS? Message-ID: <710b78fc1001112107k44441477xcd10662430825e81@mail.gmail.com> My bogometer is in the red. Please read and critique. http://www.businessgreen.com/business-green/news/2254326/bing-backs-world-greenest http://ecosia.org/ Should we translate this as "Microsoft greenwashes Bing, hapless WWF lends support"? -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From max at maxmore.com Tue Jan 12 06:34:16 2010 From: max at maxmore.com (Max More) Date: Tue, 12 Jan 2010 00:34:16 -0600 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto Message-ID: <201001120634.o0C6YSv2005154@andromeda.ziaspace.com> Emlyn: Your long and angry response to the Lanier interview was stimulating. Sorry if my posting that distracted you from other tasks and drew you into a lengthy broadside... Although I disagreed with some of what you said in an earlier post on a related topic, I found that I agreed with almost everything you said in your current response. 
The one obvious point on which I disagree is this: >Why do they only read that? Because mostly we just need a good, >impartial summary and Wikipedia does that wonderfully. I agree that Wikipedia is generally excellent for non-controversial topics. But for controversial topics, both my personal experience and reading about others' experiences says that it does *not* do a wonderful job of being impartial. Cliques of editors can and do exert great control over content, making it very difficult for anyone outside their clique to make changes. They enjoy the appearance of broad input without the reality. (Different editorial policies and incentives might change this, but that's the way it's been for years now.) Apart from that, yes, Lanier seems both anti-transhumanist, pretty much anti-technological progress, and ultimately deeply conservative -- and not in any good sense. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From jonkc at bellsouth.net Tue Jan 12 06:40:38 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 01:40:38 -0500 Subject: [ExI] Psi. (no need to read this post you already know what it says ) In-Reply-To: <4B4910E2.7060300@satx.rr.com> References: <630266.35889.qm@web180203.mail.gq1.yahoo.com> <4B4910E2.7060300@satx.rr.com> Message-ID: On Jan 9, 2010, Damien Broderick wrote: > I also find some of Sheldrake's theories over the top or silly, but (1) that has nothing to do with his experiments OF COURSE THAT HAS SOMETHING TO DO WITH HIS EXPERIMENTS! If the man is known for being a fool then it's not unreasonable to think his experiments may have been foolishly performed, assuming he did any experiment at all and didn't just go straight to the typewriter. > if a scientist with a solid background speaks up for psi, it *means* he's a lunatic/gullible/lying etc, so you don't need to consider anything further that he says. And you think this ridiculous situation has continued unabated for centuries. Damien, that is Bullshit, just Bullshit. > So why aren't their papers published in Nature and Science? Suppose stem cell papers were routinely sent for review to Jesuits at the God Hates Abortion Institute at Notre Dame Good God almighty! I would estimate that those two journals published about 60% of the scientific discoveries made in the 20th century, and you are comparing them to some religious rag. BULLSHIT! John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Tue Jan 12 06:48:57 2010 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jan 2010 17:18:57 +1030 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto In-Reply-To: <201001120634.o0C6YSv2005154@andromeda.ziaspace.com> References: <201001120634.o0C6YSv2005154@andromeda.ziaspace.com> Message-ID: <710b78fc1001112248n172d4eeen675df63edfa0ed3b@mail.gmail.com> 2010/1/12 Max More : > Emlyn: Your long and angry response to the Lanier interview was stimulating. > Sorry if my posting that distracted you from other tasks and drew you into a > lengthy broadside... I was supposed to be cleaning the house, and instead spent many hours writing that. Thanks :-) Although I'd like to know where my house-cleaning robots are; it's 2010 ffs.
> Although I disagreed with some of what you said in an > earlier post on a related topic, I found that I agreed with almost > everything you said in your current response. > > The one obvious point on which I disagree is this: > >> Why do they only read that? Because mostly we just need a good, impartial >> summary and Wikipedia does that wonderfully. > > I agree that Wikipedia is generally excellent for non-controversial topics. > But for controversial topics, both my personal experience and reading about > others' experiences says that it does *not* do a wonderful job of being > impartial. Cliques of editors can and do exert great control over content, > making it very difficult for anyone outside their clique to make changes. > They enjoy the appearance of broad input without the reality. (Different > editorial policies and incentives might change this, but that's the way it's > been for years now.) Yes, that's true. With Wikipedia, I personally get most value from non-controversial topics anyway; I wouldn't go there to understand contemporary US politics, but I might go there to understand the meaning and history of a term like, say, utilitarianism. The nice thing though is that it is in no way a monopoly. Wikipedia is largely arrived at through Google searches, and so is the rest of the web, so if you really disagree with it, you can post endless rebuttals of its articles, as much as your heart desires. I don't think it gets any special treatment as far as search rankings go. > > Apart from that, yes, Lanier seems both anti-transhumanist, pretty much > anti-technological progress, and ultimately deeply conservative -- and not > in any good sense. > > Max The few things I'd read from Lanier previously, I'd quite liked. I'm disappointed. I *think* he's a bit of a lefty, politically, but I'm not sure. If there's any clue that networked humanity is something new under the sun, politically, socially and economically, it is in the fact that the left and right hate it with equal vehemence. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From jonkc at bellsouth.net Tue Jan 12 07:07:13 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 02:07:13 -0500 Subject: [ExI] Avatar: misanthropy in three dimensions. In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: <43D0DF0D-1A46-4FF0-BC57-B5ED1A87C0A8@bellsouth.net> On Jan 9, 2010, at 1:22 PM, Max More wrote: > > Comments from anyone who has seen the movie? (I haven't yet.) Very good movie. Of course corporations and technology are evil and tree huggers know no vice, but that is mandatory for any movie set in the future so that didn't bother me; and evil or not the technology displayed is amazing, parts of it are quite beautiful. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Tue Jan 12 07:58:55 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 11 Jan 2010 23:58:55 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> Message-ID: <4B4C2BBF.3070307@rawbw.com> Stathis writes > A text or code has no life of its own. It's of trivial interest that > multiple meanings could be attached to it: the important thing is to > work out the originally intended meaning. My favorite chapter of Gödel, Escher, Bach is "The Location of Meaning".
Hofstadter points out, essentially, that there are two kinds of meaning: conventional and isomorphic (I'm not sure after all these years whether the terminology is mine or his). You speak here of conventional meaning---meaning which operates by convention. Our convention for "z-e-b-r-a" is the large striped African mammal, though obviously those letters could be assigned to something else. Isomorphic meaning, however, is not at all arbitrary. The depth and jiggles in the grooves of a vinyl record have, in some cases, an objective isomorphism to the first movement of Beethoven's fifth symphony. That's their undeniable meaning, no two ways about it. > With computations, however, the situation may be different. > If it is an inputless program the meaning ascribed to it by > the original programmer has no magical potency to affect it; > any other meaning that could possibly be mapped to it, > including a meaning that changes from moment to moment, > is just as good. I believe I disagree here. If the computation is isomorphic to some other ultimate entity, then that's its meaning. We need only worry about the fidelity. To use the often-heard rainstorm analogy, an exactly detailed computation of that rainstorm may not make anyone wet, but if there is anything to how the rainstorm feels, then we claim that the program feels the same way. > This means that any physical activity which could, under some > mapping, be seen as implementing a computation is implementing that > computation. Like interpreting a text according to an arbitrary > encoding this is a trivial observation in general, but it becomes > interesting when we consider computations that create their own > observers in virtual environments. Well, I'll nit-pick the first sentence here: I think it generally false that "*any* physical activity which could, under some mapping, be seen as implementing a computation is implementing that computation". We must not, after all, put "too much work" into finding such a mapping. For, if we do, then the Theory of Dust becomes acceptable, and it no longer matters what you or I do in anything, because the patterns of all outcomes are already out there between the stars. Instead, only mappings that are evident, i.e. prima facie or manifest, can be accepted. In fact, going back to Ben's example, a decipherment of Linear A will only be said to succeed when there is relatively little "stress" to such a mapping, i.e., the mapping becomes plain and patently manifest. Anyone who, on the other hand, puts forth a "decipherment" of Linear A that seems at all forced will find that no one will have any interest in it. Arbitrary mappings yield nothing and reveal nothing. Lee P.S. I second the motion that a medal should be struck in your honor, to applaud your perseverance with troublesome types like Gordon and me through thick and thin, without showing the slightest exasperation. :) From bbenzai at yahoo.com Tue Jan 12 09:10:24 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 12 Jan 2010 01:10:24 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Stathis Papaioannou wrote: > Of course, there > remains ... the possibility, > however unlikely, that the brain is not computable. I'm at a loss to understand this. On the face of it, it seems to be a claim that brains do not and cannot exist, but that can't be what you mean.
Everything that exists has been 'computed'. Everything is made of fundamental units that have been combined according to a set of rules. When we talk about making simulations we are just talking about moving this process to a different kind of fundamental unit, and discovering then applying the relevant set of rules. Thus we create models of things and processes, re-creating them on a different level of reality. If any aspect of a thing or process is not captured in the model, it means the model is not fine-grained enough, not extensive enough, or uses the wrong rules. All these things are fixable, at least in principle. So what does it mean to say that something is 'not computable', if not that it's impossible? Ben Zaiboc From stefano.vaj at gmail.com Tue Jan 12 10:39:56 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 11:39:56 +0100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> Message-ID: <580930c21001120239u4435de5cj101b3063c6ea683c@mail.gmail.com> 2010/1/11 ddraig : > 2010/1/11 Stefano Vaj : >> Neither have I, and I am quite impatient to get it on 3d blu-ray. > > You will watch it in 3d at home? How? No big deal. Some kind of 3d blu-ray "standard" is in the works, but several titles are already out, based on the usual tech of polarised glasses (which are included in the disc box). See, e.g., Coraline or Journey To The Center Of The Earth. At least for My Bloody Valentine 3d the disc even let you regulate the depth of the images with your remote control from the menu, as you do when you increase or decrease the audio volume... Sadly enough, I have never had any stereoscopic vision, so I miss what all the excitement is about... ;-) -- Stefano Vaj From stefano.vaj at gmail.com Tue Jan 12 11:45:48 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 12:45:48 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <669329.67544.qm@web113606.mail.gq1.yahoo.com> References: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Message-ID: <580930c21001120345i1a9eaf2dmef8d7f334aaa39e8@mail.gmail.com> 2010/1/12 Ben Zaiboc : > I'm at a loss to understand this. On the face of it, it seems to be a claim that brains do not and cannot exist, but that can't be what you mean. > > Everything that exists has been 'computed'. Everything is made of fundamental units that have been combined according to a set of rules. > > When we talk about making simulations we are just talking about moving this process to a different kind of fundamental unit, and discovering then applying the relevant set of rules. Thus we create models of things and processes, re-creating them on a different level of reality. OTOH, "computability" could be intended in the (Wolframian?) sense that there are no shortcuts. A given problem is uncomputable if it is not "reducible", so you have to run the process and see where it leads. This however tells us nothing about the fact that a functionally equivalent process can be implemented on a different platform. For instance, you can run a cellular automaton with pencil and paper or with a PC.
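A minimal sketch of that point in Python (Rule 110 is just a standard textbook rule; any would do). The rule table below is exactly the kind of thing you could evaluate with pencil and paper, one cell at a time, and the PC will produce the identical sequence of rows:

    RULE = 110  # each of the 8 three-cell neighbourhoods maps to one output bit

    def step(cells):
        """Apply the rule once to a row of 0/1 cells (edges wrap around)."""
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            neighbourhood = (left << 2) | (centre << 1) | right
            out.append((RULE >> neighbourhood) & 1)
        return out

    row = [0] * 31
    row[15] = 1          # start with a single live cell
    for _ in range(15):
        print("".join(".#"[c] for c in row))
        row = step(row)

Whether the loop runs in silicon or in a notebook, the computation, the mapping from each row to the next, is the same; only the substrate differs.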
-- Stefano Vaj From stefano.vaj at gmail.com Tue Jan 12 11:59:54 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 12:59:54 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21001120359y9bc18c5oac02e87449cbb72d@mail.gmail.com> 2010/1/9 Gordon Swobe : > Human operators ascribe meanings to the symbols their computers manipulate. How would that be the case? Ascribing meaning to a symbol means to associate something to something else. As in, e.g., ax^2 + bx + c = y when I ascribe to x the meaning of "5". What would be so special in what humans would be doing in this respect? -- Stefano Vaj From stathisp at gmail.com Tue Jan 12 12:07:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jan 2010 23:07:51 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <4B4C2BBF.3070307@rawbw.com> References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> <4B4C2BBF.3070307@rawbw.com> Message-ID: 2010/1/12 Lee Corbin : > My favorite chapter of Gödel, Escher, Bach is "The Location > of Meaning". Hofstadter points out, essentially, that there > are two kinds of meaning: conventional and isomorphic (I'm > not sure after all these years whether the terminology is > mine or his). > > You speak here of conventional meaning---meaning which > operates by convention. Our convention for "z-e-b-r-a" > is the large striped African mammal, though obviously > those letters could be assigned to something else. > > Isomorphic meaning, however, is not at all arbitrary. > The depth and jiggles in the grooves of a vinyl > record have, in some cases, an objective isomorphism > to the first movement of Beethoven's fifth symphony. > That's their undeniable meaning, no two ways about it. Vinyl records are interesting, because the relationship between the bumps in the grooves and the music they represent is not as straightforward as you might think. For reasons of sound quality, during the cutting of a record the high frequencies are boosted and the low frequencies attenuated, and during playback this must be undone by applying the exact inverse operation. This so-called RIAA equalisation is traditionally achieved by using a network of capacitors and resistors in the amplifier preamp stage. The interesting thing about this is that there is no way of figuring out what the equalisation curve is unless you are given that information. I'm not sure if equalisation was used for the Voyager golden record, but it would have made sense to record it with a flat frequency response, otherwise the aliens would only be able to hear a distorted version of what we sound like. But worse would have been sending a CD, compressed audio such as an MP3 file, or encrypted audio, in that order. That would have completely stumped the aliens, no matter how smart they were. This is because digital files have conventional meaning, not isomorphic meaning. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Jan 12 13:41:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 05:41:20 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: <580930c21001120359y9bc18c5oac02e87449cbb72d@mail.gmail.com> Message-ID: <240509.42222.qm@web36505.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stefano Vaj wrote: >> Human operators ascribe meanings to the symbols their >> computers manipulate. > > How would that be the case?
Ascribing meaning to a symbol > means to associate something to something else. As in, e.g., ax^2 + > bx + c = y when I ascribe to x the meaning of "5". What would be so > special in what humans would be doing in this respect? I consider my pocket calculator as an extension of my own mind. I use it as a tool. If I encounter a screw that I can't remove with my fingers, I grab my pocket screwdriver. If I encounter a math problem that I can't solve with my brain, I grab my pocket calculator. I don't believe either of these pocket tools of mine have understanding of anything whatsoever, but I might find it fun to pretend they do. -gts From stathisp at gmail.com Tue Jan 12 13:44:07 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 00:44:07 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <669329.67544.qm@web113606.mail.gq1.yahoo.com> References: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Message-ID: 2010/1/12 Ben Zaiboc : > Stathis Papaioannou wrote: > >> Of course, there >> remains ... the possibility, >> however unlikely, that the brain is not computable. > > > I'm at a loss to understand this. On the face of it, it seems to be a claim that brains do not and cannot exist, but that can't be what you mean. > > Everything that exists has been 'computed'. Everything is made of fundamental units that have been combined according to a set of rules. > > When we talk about making simulations we are just talking about moving this process to a different kind of fundamental unit, and discovering then applying the relevant set of rules. Thus we create models of things and processes, re-creating them on a different level of reality. > > If any aspect of a thing or process is not captured in the model, it means the model is not fine-grained enough, not extensive enough, or uses the wrong rules. All these things are fixable, at least in principle. > > So what does it mean to say that something is 'not computable', if not that it's impossible? Computable means computable by a Turing machine. Not all numbers and functions are computable, but it is not clear how this is relevant to physics. True randomness is not computable (except by a trick involving observers in branching virtual worlds) but there is no evidence that pseudo-randomness, which is computable, won't do as well. Real numbers are not computable, but even if it turns out that some physical parameters are continuous rather than discrete there is no reason to suppose that infinite precision arithmetic will be required to simulate the brain, since thermal motion effects would make precision beyond a certain number of decimal places useless. Finally, there may be new physics, such as a theory of quantum gravity, which is not computable. Roger Penrose thinks that this is the case, and that the brain utilises this exotic physics to do things that no Turing machine ever could, such as have certain mathematical insights. However, few believe that Penrose is right, and almost all agree that his main argument from Gödel's theorem is wrong. On balance, it seems that the brain works using plain old-fashioned chemistry, which no-one claims is not computable. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Jan 12 13:53:57 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 05:53:57 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <502327.74481.qm@web36508.mail.mud.yahoo.com> > My favorite chapter of Gödel, Escher, Bach is "The > Location of Meaning".
Hofstadter points out, essentially, that... At the end of the day, after all his intellectual musings and gyrations, Hofstadter still needs to demonstrate how he solves the symbol grounding problem without a mind in which symbols can find their grounding: Symbol Grounding http://en.wikipedia.org/wiki/Symbol_grounding -gts From stefano.vaj at gmail.com Tue Jan 12 13:56:19 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 14:56:19 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <240509.42222.qm@web36505.mail.mud.yahoo.com> References: <580930c21001120359y9bc18c5oac02e87449cbb72d@mail.gmail.com> <240509.42222.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21001120556l254fa233p4bf11c57fb083aa7@mail.gmail.com> 2010/1/12 Gordon Swobe : > --- On Tue, 1/12/10, Stefano Vaj wrote: > >>> Human operators ascribe meanings to the symbols their >>> computers manipulate. >> >> How would that be the case? Ascribing meaning to a symbol >> means to associate something to something else. As, in, e.g., ax^2 + >> bx + c = y when I ascribe to x the meaning of "5". What would be so >> special in what humans would be doing in this respect? > > I consider my pocket calculator as an extension of my own mind. I use it as a tool. If I encounter a screw that I can't remove with my fingers, I grab my pocket screwdriver. If I encounter a math problem that I can't solve with my brain, I grab my pocket calculator. But how would they associate symbols with values the way any universal computer is able to do? > I don't believe either of these pocket tools of mine have understanding of anything whatsoever, but I might find it fun to pretend they do. In any event, I assume one does not really have any idea of whether something has any understanding of anything, unless one has first a definition of what "understanding" would mean... And even if you had one, as long as the definition were making reference to some kind of "subjective consciousness" rather than to some phenomenon you would not know anyway. -- Stefano Vaj From stathisp at gmail.com Tue Jan 12 14:00:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 01:00:42 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <502327.74481.qm@web36508.mail.mud.yahoo.com> References: <502327.74481.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : >> My favorite chapter of G?del, Escher, Bach is "The >> Location of Meaning". Hofstadter points out, essentially, that... > > At the end of the day, after all his intellectual musings and gyrations, Hofstadter still needs to demonstrate how he solves the symbol grounding problem without a mind in which symbols can find their grounding: > > Symbol Grounding > http://en.wikipedia.org/wiki/Symbol_grounding At the end of the next day you need to show how the mind solves the symbol grounding problem. I don't expect you to come up with the actual answer, just an indication of what an answer might look like would suffice. -- Stathis Papaioannou From jonkc at bellsouth.net Tue Jan 12 14:48:30 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 09:48:30 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <240509.42222.qm@web36505.mail.mud.yahoo.com> References: <240509.42222.qm@web36505.mail.mud.yahoo.com> Message-ID: <7ED69C97-1790-43EA-AED3-E26F5E6801B1@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: > I don't believe either of these pocket tools of mine have understanding of anything whatsoever How can you tell? 
You think understanding is a completely useless property that doesn't change the behavior of something in the slightest. Why do you even care? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Jan 12 14:51:29 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 06:51:29 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: <580930c21001120556l254fa233p4bf11c57fb083aa7@mail.gmail.com> Message-ID: <12876.6100.qm@web36508.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stefano Vaj wrote: > In any event, I assume one does not really have any idea of > whether something has any understanding of anything, unless one has > first a definition of what "understanding" would mean... It seems then that you want to understand the meaning of "understanding". But that shows me that you already understand it. Someone here tried the other day to re-define understanding in such a way that brains do not really do this thing called "understanding" -- that they do something else instead that we only call understanding. I had trouble following his argument because it seemed to me that he wanted me to understand it, but I couldn't understand it according to the argument. :-) Sometimes you just have to hang your hat on things. I hang my hat on, among other things, the common sense notion that healthy people with developed brains can understand the meanings of words and symbols. It seems pretty obvious that we do it even if I can't tell you exactly how we do it. If I did not think so, and if you also did not think so, then we would not be communicating with symbols right now on ExI. -gts From gts_2000 at yahoo.com Tue Jan 12 15:18:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 07:18:45 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <7ED69C97-1790-43EA-AED3-E26F5E6801B1@bellsouth.net> Message-ID: <42347.9787.qm@web36503.mail.mud.yahoo.com> --- On Tue, 1/12/10, John Clark wrote: > You think understanding is a completely useless property that doesn't > change the behavior of something in the slightest. No, never said that. Humans have it, and if we want a s/h system to behave as if it has it then we must simulate it with syntactic rules in the software. That's what programmers do. -gts From stefano.vaj at gmail.com Tue Jan 12 15:36:40 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 16:36:40 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <12876.6100.qm@web36508.mail.mud.yahoo.com> References: <580930c21001120556l254fa233p4bf11c57fb083aa7@mail.gmail.com> <12876.6100.qm@web36508.mail.mud.yahoo.com> Message-ID: <580930c21001120736m1c67096dl33c91d7a6ec43146@mail.gmail.com> 2010/1/12 Gordon Swobe : > Sometimes you just have to hang your hat on things. I hang my hat on, among other things, the common sense notion that healthy people with developed brains can understand the meanings of words and symbols. It seems pretty obvious that we do it even if I can't tell you exactly how we do it. If I did not think so, and if you also did not think so, then we would not be communicating with symbols right now on ExI. Common sense merely indicates to me that we are inclined to project "subjective" states on other things. By definition such projections are "not even wrong", not saying anything about phenomena (i.e., "objects") other than our own psychology. And they are routinely extended to many non-animal, or even non-organic, objects and systems. 
Moreover, even such projections are quite conditional. For instance, common sense does not tell me that perfectly developed brains of adult fruitflies have a better understanding, in whatever sense of the word you may choose to adopt, "of the meaning of words and symbols" than my PC (and in any event does not tell me that some "intelligent" features are exhibited by any developed brain which could not be replicated by *any* universal computer). -- Stefano Vaj From gts_2000 at yahoo.com Tue Jan 12 15:45:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 07:45:21 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <249656.87266.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stathis Papaioannou wrote: > At the end of the next day you need to show how the mind solves the > symbol grounding problem. We'll know the complete answer someday. For now we need only know and agree that the mind has this cognitive capacity to understand, say, Chinese symbols. Now we ask whether implementing a formal program can give the mind that capacity to ground symbols. If it can then we should be able to do an experiment in which a mind obtains that cognitive capacity from mentally running a program. But as it turns out, we do not obtain that capacity from mentally running such a program. So whatever the mind does to get that cognitive capacity, it doesn't obtain it from running a formal program. Now we know more about the mind than we did before, even if we don't yet know the complete answer. -gts From jonkc at bellsouth.net Tue Jan 12 15:55:18 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 10:55:18 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <42347.9787.qm@web36503.mail.mud.yahoo.com> References: <42347.9787.qm@web36503.mail.mud.yahoo.com> Message-ID: <65BF7102-265A-44CF-BE80-23701CCB07B5@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: >> You think understanding is a completely useless property that doesn't >> change the behavior of something in the slightest. > > No, never said that. Humans have it, and if we want a s/h system to behave as if it has it then we must simulate it You have said that "simulated understanding" is not understanding, so I repeat: you think genuine understanding is a completely useless property, unless, that is, you wish to change your position. I certainly would if I were you. However if you do change you should be aware of the consequences; if genuine understanding is not useless then Evolution can produce it and the Turing Test can detect it, if genuine understanding is useless then the Turing Test can't detect it and neither can Evolution. Do you understand? Of course making a distinction between genuine understanding and simulated understanding is pretty silly, but that's another issue. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Jan 12 15:59:17 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 10:59:17 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Message-ID: <25BA8FEB-96E8-4A53-8DAF-AAAAC47EAFC4@bellsouth.net> On Jan 12, 2010, Stathis Papaioannou wrote: > Real numbers are not computable Some are, most aren't. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From max at maxmore.com Tue Jan 12 16:10:59 2010 From: max at maxmore.com (Max More) Date: Tue, 12 Jan 2010 10:10:59 -0600 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car Message-ID: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> This is sad. Mr. Rollino looked amazingly good at 103. http://www.msnbc.msn.com/id/34818457/ns/us_news-life/ Max From jonkc at bellsouth.net Tue Jan 12 16:20:15 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 11:20:15 -0500 Subject: [ExI] Meaningless Symbols In-Reply-To: <249656.87266.qm@web36501.mail.mud.yahoo.com> References: <249656.87266.qm@web36501.mail.mud.yahoo.com> Message-ID: <7CBC881D-1E02-454C-BE0D-F8C0E57A27A7@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: > we ask whether implementing a formal program can give the mind that capacity to ground symbols. If it can then we should be able to do an experiment in which a mind obtains that cognitive capacity from mentally running a program. How on Earth is this experiment supposed to work? There is no way you can know what its cognition is, all you can do is observe what it does, and you say that tells you nothing regardless of how brilliant and charming it behaves. > But as it turns out, we do not obtain that capacity from mentally running such a program. How on Earth do you know that? > We'll know the complete answer someday. There is not a snowball's chance in hell. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Jan 12 17:45:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 04:45:49 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <249656.87266.qm@web36501.mail.mud.yahoo.com> References: <249656.87266.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : > --- On Tue, 1/12/10, Stathis Papaioannou wrote: > >> At the end of the next day you need to show how the mind solves the >> symbol grounding problem. > > We'll know the complete answer someday. For now we need only know and agree that the mind has this cognitive capacity to understand, say, Chinese symbols. Now we ask whether implementing a formal program can give the mind that capacity to ground symbols. If it can then we should be able to do an experiment in which a mind obtains that cognitive capacity from mentally running a program. But as it turns out, we do not obtain that capacity from mentally running such a program. So whatever the mind does to get that cognitive capacity, it doesn't obtain it from running a formal program. > > Now we know more about the mind than we did before, even if we don't yet know the complete answer. It's not much of an answer. I was hoping you might say something like, understanding is due to a special chemical reaction in the brain, and since computers usually aren't chemical, they don't have it even if they can simulate its behaviour. In all that you and Searle have said, the strongest statement you can make is that a computer that is programmed to behave like a brain will not *necessarily* have the consciousness of the brain. You have not excluded the *possibility* that it might be conscious. You have no proof that, for example, understanding requires carbon atoms and is impossible without them.
Nor have you any proof that arranging silicon and copper atoms in particular configurations that can be interpreted as implementing a formal program will *prevent* understanding that might have occurred had the arrangement been otherwise. In contrast, I have presented an argument which shows that it is *impossible* to separate understanding from behaviour. We have been talking about computerised neurons but the case can be made more generally. If God makes miraculous neurons that behave just like normal neurons but lack understanding, then these neurons could be used to selectively remove any aspect of consciousness such as perception, emotion and understanding. However, because the miraculous neurons behave normally in their interactions with the other neurons, the subject will behave normally and will not notice that anything has changed. He will lose visual perception but he will not only be able to describe everything he sees, he will also honestly believe that he sees normally. He won't even comment that things are looking a little blurry around the edges, since the part of his brain responsible for noticing, reflecting on and verbalising will behave exactly the same as if the miraculous neurons had not been installed. Now surely if there is *anything* that can be said about visual perception, it is that a conscious, rational person will at least notice that something a bit unusual has happened if he suddenly goes completely blind; or that he has lost the power to understand speech, or the ability to feel pain. But with these miraculous neurons, any aspect of your consciousness could be arbitrarily removed and you would never know it. The conclusion is that in fact you would have normal consciousness with the miraculous neurons. In other words, they're not miraculous at all: not even God can make neurons that behave normally but lack consciousness. It's a logical impossibility, and God can at best only do the physically impossible, not the logically impossible. -- Stathis Papaioannou From spike66 at att.net Tue Jan 12 18:27:19 2010 From: spike66 at att.net (spike) Date: Tue, 12 Jan 2010 10:27:19 -0800 Subject: [ExI] roids in baseball Message-ID: <2B6901F67B3C4D7AB8A273A9A3894B81@spike> One of the US baseball stars yesterday confessed to having been using steroids during the period in which he broke a bunch of long-standing records. Unfortunately the American proletariat totally missed the valuable signal among the noise. We treat it as a big cheating scandal instead of realizing that this is a perfect opportunity to test the value of various medications, a terrifically valuable laboratory in so many ways. We have preserved absurdly detailed records on the performance of so many players, from over a century, most of which are from before artificial steroids existed, so we have a great control group. We could use the information to help others achieve higher athletic performance, or help the aged combat the cruelty of the years. Instead we toss the valuable information into the waste can of shame. The shame is on us. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Jan 12 19:51:22 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 12 Jan 2010 14:51:22 -0500 Subject: [ExI] roids in baseball In-Reply-To: <2B6901F67B3C4D7AB8A273A9A3894B81@spike> References: <2B6901F67B3C4D7AB8A273A9A3894B81@spike> Message-ID: 2010/1/12 spike : > ...
We have > preserved absurdly detailed records on the performance of so many players, > from over a century, most of which are from before artificial steroids > existed, so we have a great control group. We could use the information to > help others achieve higher athletic performance, or help the aged combat the > cruelty of the years. Given the changes that occurred to equipment and rules over the years, I'm not sure how much you can determine by comparing stats over a hundred-year span. Baseball fans can argue over Babe Ruth vs. Mark McGwire to the point that it would make our "artificial consciousness" discussion seem terse. -Dave From jonkc at bellsouth.net Tue Jan 12 20:35:44 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 15:35:44 -0500 Subject: [ExI] Meaningless Symbols In-Reply-To: <12876.6100.qm@web36508.mail.mud.yahoo.com> References: <12876.6100.qm@web36508.mail.mud.yahoo.com> Message-ID: <9A7F80C0-F652-4A85-82B9-E93B67C0919A@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: > > It seems then that you want to understand the meaning of "understanding". But that shows me that you already understand it. I understand the definition of understanding, but I don't understand the definition of definition. > > Someone here tried the other day to re-define understanding in such a way that brains do not really do this thing called "understanding" -- that they do something else instead that we only call understanding. Homer didn't write the Iliad and the Odyssey, they were written by another blind poet from Smyrna in 850BC who just happened to have the same name. > Sometimes you just have to hang your hat on things. I hang my hat on, among other things, the common sense notion that healthy people with developed brains can understand the meanings of words and symbols. Because they act as if they do. > It seems pretty obvious that we do it even if I can't tell you exactly how we do it. Because they act as if they do. > If I did not think so[...] Then you'd think you were the only conscious being in the universe, and nobody can live like that. John K Clark > , and if you also did not think so, then we would not be communicating with symbols right now on ExI. > > -gts > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Jan 12 21:21:16 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 16:21:16 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> <4B4C2BBF.3070307@rawbw.com> Message-ID: <8297DA02-72F3-4062-B67B-074463DCDD4F@bellsouth.net> On Jan 12, 2010, Stathis Papaioannou wrote: > worse would have been sending a CD, compressed audio such as an MP3 file, or encrypted > audio, in that order. That would have completely stumped the aliens, no matter how smart they were. I wonder if that is true. Perhaps for MP3 files, because at least to some extent it was designed to accommodate the idiosyncrasies and limitations of the human auditory system, but if the music was converted into a Zip file our alien friends might have a chance.
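One crude way to see what they would have to work with: byte statistics. A rough sketch in Python (standard library only; the "song" is a toy stand-in, not real audio):

    import math, zlib
    from collections import Counter

    def bits_per_byte(data):
        """Shannon entropy of the byte distribution; 8.0 would look fully random."""
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    raw = b"do re mi fa sol la ti " * 20000   # very repetitive "music"
    packed = zlib.compress(raw)

    print(bits_per_byte(raw))      # far below 8: obvious structure
    print(bits_per_byte(packed))   # much closer to 8, but not all the way
    print(packed[:2].hex())        # '789c': every default zlib stream opens
                                   # with the same two header bytes

That residual statistical tilt, plus fixed regularities like the header, is the sort of foothold a determined decoder gets.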
If Zip compression were perfect the file would seem completely random and even a Jupiter Brain would be sunk, but Zip is not perfect, redundancy remains, so they might figure out it's a compressed file and guess it represents a vibration. Alien or no I'll bet they're familiar with vibration. As for decoding encrypted stuff, that depends on if Quantum Computers can really be made to work. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Tue Jan 12 23:38:05 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 12 Jan 2010 16:38:05 -0700 Subject: [ExI] I'm no fool In-Reply-To: <201001080843.o088hLc2019286@andromeda.ziaspace.com> References: <201001080843.o088hLc2019286@andromeda.ziaspace.com> Message-ID: <2d6187671001121538v51707e80w1db326eb708dabab@mail.gmail.com> Max More wrote: > With the current discussion about psi, and our continuing interest in > rational thinking... Recently, I heard a line in a South Park episode that I > found extremely funny and really quite deep, paradoxical, and illuminating: > > "I wasn't born again yesterday" > > (This was in South Park, season 7, "Christian Rock Hard" Max, you need to get busy writing those pop culture & philosophy books! I know you can do better than what is already out there. Or you could start by writing "Cryonics for Dummies," to be followed up by "Transhumanism for Dummies..." John : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Wed Jan 13 01:46:47 2010 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 13 Jan 2010 12:16:47 +1030 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4A2384.7030700@satx.rr.com> References: <4B4A2384.7030700@satx.rr.com> Message-ID: <710b78fc1001121746y360ea3afw20235a8f301f71f7@mail.gmail.com> I've responded to this below. Summary: I don't buy it. Also, just for fun, I've put my description of what I think subjective conscious experience is and does at the bottom of this email, and am hoping for feedback. 2010/1/11 Damien Broderick : > New Scientist: You won't find consciousness in the brain > > > > 7 January 2010 by Ray Tallis > > [Raymond Tallis wrote a wonderful deconstruction of deconstruction and > poststructuralism, NOT SAUSSURE] > > MOST neuroscientists, philosophers of the mind and science > journalists feel the time is near when we will be able to explain > the mystery of human consciousness in terms of the activity of the > brain. There is, however, a vocal minority of neurosceptics who > contest this orthodoxy. Among them are those who focus on claims > neuroscience makes about the preciseness of correlations between > indirectly observed neural activity and different mental functions, > states or experiences. > > This was well captured in a 2009 article in Perspectives on > Psychological Science by Harold Pashler from the University of > California, San Diego, and colleagues, that argued: "...these > correlations are higher than should be expected given the (evidently > limited) reliability of both fMRI and personality measures. The high > correlations are all the more puzzling because method sections > rarely contain much detail about how the correlations were > obtained." 
> Believers will counter that this is irrelevant: as our means of > capturing and analysing neural activity become more powerful, so we > will be able to make more precise correlations between the quantity, > pattern and location of neural activity and aspects of > consciousness. > > This may well happen, but my argument is not about technical, > probably temporary, limitations. It is about the deep philosophical > confusion embedded in the assumption that if you can correlate > neural activity with consciousness, then you have demonstrated they > are one and the same thing, and that a physical science such as > neurophysiology is able to show what consciousness truly is. I don't think there really is such a confusion. I'm pretty sure that the people studying the structure of the brain, looking for correlates to consciousness, know about this; we are all subjectively conscious beings, after all. It's just that you have to start somewhere; the approach is to keep finding mechanism, keep narrowing things down, and hope that along the way better information and better understanding will yield insight on how to find subjective consciousness itself. Given that currently, regarding subjective first-person conscious experience, we can barely even frame the questions we want to ask, digging hard into areas that we can make sense of is a great approach, particularly given that the one and the other must be massively interrelated.
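Incidentally, the Pashler complaint quoted earlier is quantitative, not just rhetorical: classical test theory says an observed correlation can't exceed the square root of the product of the two measures' reliabilities. A toy sketch in Python (the reliability figures are made up, merely in the ballpark such papers discuss):

    import math

    def max_observable_r(rel_x, rel_y):
        """Ceiling on an observed correlation given the two measures'
        reliabilities, even if the underlying traits correlate perfectly."""
        return math.sqrt(rel_x * rel_y)

    # Hypothetical reliabilities for an fMRI measure and a personality scale:
    print(max_observable_r(0.7, 0.8))   # ~0.75, so reported r's of 0.85+
                                        # should raise an eyebrow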
At best I can understand this as saying that without conscious experience, the world is just a dance of atoms. There is nothing "wet" because you need a mind to experience "wet". Yet wetness is also operational; it is a loose high level description of how water (groups of H20 molecules) will interact with other substances (it might infuse porous ones, for instance), and there is no need for the conscious observer to be present, in theory, for that to still happen. It's a disingenuous bit of wordplay though, no? At many scales, simple things can group together and exhibit higher level group behaviours that aren't necessarily obvious from the basics of the elements, and aren't like the elements. One H20 molecule really has nothing about it of wetness or shinyness or coldness; no one would describe one molecule as wet. In aggregate, however, the grouped substance does. > We cannot therefore conclude that when we see what seem to be neural > correlates of consciousness that we are seeing consciousness itself. Sure, that's why they're called correlates. > While neural activity of a certain kind is a necessary condition for > every manifestation of consciousness, from the lightest sensation to > the most exquisitely constructed sense of self, it is neither a > sufficient condition of it, nor, still less, is it identical with > it. For the activity of individual neurons, I'll accept this. But for the whole system of neurons, it's not at all clear. The wordplay about water above doesn't in any way tell us about it. Groups of things really have properties that the individuals do not, in that they have higher level behaviours which aren't similar to the behaviour of their elements. An H2O molecule is not wet, but water is. Neurons are very unlikely to have subjective conscious (and their molecules and atoms even less so), but that doesn't tell us whether the system of neurons is. We *don't know* what subjective consciousness is, so we can't say. It is probably safe to say that the neural system is necessary for it, but suffient and or equivalent? It could be, it might not be. Occam's razor says to me that it is more likely that the neural system is sufficient for consciousness, because otherwise we are looking for some other mechanism, and there's no evidence of any. But anyway, his argument that a group of things can't have different properties to that of the individual things, it's wrong. > If it were identical, then we would be left with the insuperable > problem of explaining how intracranial nerve impulses, which are > material events, could "reach out" to extracranial objects in order > to be "of" or "about" them. wtf? Where is there reaching out? There is no necessity for anything to magically breach the skull. We get input, it goes into the neural system, it gets processed, it and processed versions of it get stored as memories and as modifications to our mental processes. There is a representation inside the brain. The mechanism of the brain can work (must work!) entirely in terms of these representations. That we then have subjective experience of some piece of this working, including feelings about the things represented (what are qualia but feeling about representations), is mysterious, but we have no reason to suppose it is anything other than a higher level property of our neural hardware (otherwise, what is it?). The idea of "reaching out" is ridiculous. 
If we were directly experiencing the outside world somehow, rather than experiencing reconstituted feelings about representations of things, our mind would have all the weird failures it has; we wouldn't be able to have experiences of the world different to other people's experiences. > Straightforward physical causation > explains how light from an object brings about events in the > occipital cortex. No such explanation is available as to how those > neural events are "about" the physical object. Biophysical science > explains how the light gets in but not how the gaze looks out. The gaze - is this a reference to Foucault? (reaches for his cudgel) No gaze looks out. If we ignore first person subjective experience for a moment, everything else about the brain makes sense in terms of information processing. A robot can be self aware, in that it would have in its memory a collection of representations of things in the world, one of which is itself. Its processing would include some places where it was primary, and others where it was just one more thing in the field of things. A sophisticated enough program should be able to do all the things that we do, even come up with the same kinds of thoughts and ideas; if we accept that all the mechanism of the mind is in the brain, then it must be in principle computable, we just don't know how to do all that stuff yet. But it doesn't follow that this robot would be subjectively conscious like we are. If this subjective consciousness is the "gaze", then surely it doesn't look out, but merely looks apon the internal representation of what's out there. > Many features of ordinary consciousness also resist neurological > explanation. Take the unity of consciousness. I can relate things I > experience at a given time (the pressure of the seat on my bottom, > the sound of traffic, my thoughts) to one another as elements of a > single moment. Researchers have attempted to explain this unity, > invoking quantum coherence (the cytoskeletal micro-tubules of Stuart > Hameroff at the University of Arizona, and Roger Penrose at the > University of Oxford), electromagnetic fields (Johnjoe McFadden, > University of Surrey), or rhythmic discharges in the brain (the late > Francis Crick). > > These fail because they assume that an objective unity or uniformity > of nerve impulses would be subjectively available, which, of course, > it won't be. Even less would this explain the unification of > entities that are, at the same time, experienced as distinct. My > sensory field is a many-layered whole that also maintains its > multiplicity. There is nothing in the convergence or coherence of > neural pathways that gives us this "merging without mushing", this > ability to see things as both whole and separate. Does this make any sense to anyone? If you think of the brain as at least in part an information processing organ, then it will have representations of its inputs and itself at many different levels simultaneously (the colours brown and green and also bark and also the tree and also the forest), grouped in useful ways, including temporal grouping. That he can relate the feeling of his arse to his thoughts is in no doubt, but how does this relate to some "unity of consciousness"? Why invoke special magic for something so mundane? > And there is an insuperable problem with a sense of past and future. > Take memory. 
It is typically seen as being "stored" as the effects > of experience which leave enduring changes in, for example, the > properties of synapses and consequently in circuitry in the nervous > system. Absolutely. > But when I "remember", I explicitly reach out of the present > to something that is explicitly past. wtf??? How? With magic powers? Is this guy insisting that we have direct experience of the physical world, including the physical world of the past?? All that is required here is that you have an internal representation of the past, tucked away in your brain somewhere. > A synapse, being a physical > structure, does not have anything other than its present state. Yes, just as computer memory only has its present state, there are no time machines. > It does not, as you and I do, reach temporally upstream from the > effects of experience to the experience that brought about the > effects. Fuck a duck! All this requires is representation of the past. If you accept that we have subjective conscious awareness of some part of the processing of our minds, and that we can't explain that, there is no reason to invoke extra unknowns to describe remembering the past. Clearly, we have a representation of the past encoded in our brains, which we use to reconstitute the past. We have encodings of what happened in the past, including representations (not too sophisticated, one might add) of how we felt. It is clear to me that as we recall the past in this way, as we imagine it, we then reconstitute new, current feelings (qualia) relating to it, as if it were happening now. The best evidence that this is the case, and that we don't *actually reach back into the past*, is that we get it wrong a lot, mostly wrong actually, if you are to believe the science. Our memories (our representations of the past) are incomplete, and we fill in the blanks when we load them back up with plausible stuff. Sometimes we fabricate memory entirely. If you to "explicitly reach out of the present to something that is explicitly past", into the real past, surely all our recollections would be perfect and in perfect agreement? > In other words, the sense of the past cannot exist in a > physical system. Information systems do this with boring consistency. They store records of what happened in the past, who did what, what pieces of paper were seen by whom, etc. Your email client has a record of its past. A diary is a record of the past. These are (parts of) a physical system. > This is consistent with the fact that the physics > of time does not allow for tenses: Einstein called the distinction > between past, present and future a "stubbornly persistent illusion". What? why is this relevant? > There are also problems with notions of the self, with the > initiation of action, and with free will. Some neurophilosophers > deal with these by denying their existence, but an account of > consciousness that cannot find a basis for voluntary activity or the > sense of self should conclude not that these things are unreal but > that neuroscience provides at the very least an incomplete > explanation of consciousness. The basis for voluntary activity is straightforward; a bit of your brain is responsible for taking in a lot of input, including recent sensory information, memory, decisions and hints from other bits of the brain, and deciding on a course of action. In that it decides, based on whatever algorithms it uses, it is voluntary. 
That we have the sense of self, of subjective consciousness, is mysterious; no one will dispute that. That we feel like we make decisions freely, rather than as the result of an algorithm, is not at all mysterious; we feel all kinds of misleading things. Our brains are weird as hell, and mostly you shouldn't trust your brain too far; I certainly wouldn't turn my back on mine. The big mystery to my mind is that we have subjective consciousness at all. It doesn't seem to do anything useful that you couldn't do without it. And yet it certainly has a function, has physical presence, because we can talk about it, think about it. It can't be off in some other distinct non-physical realm, because it can affect our brains. I guess a delusion could also do that, but if it's a delusion, it's one shared by us all, and hardly counts as such. > I believe there is a fundamental, but not obvious, reason why that > explanation will always remain incomplete - or unrealisable. This > concerns the disjunction between the objects of science and the > contents of consciousness. Science begins when we escape our > subjective, first-person experiences into objective measurement, and > reach towards a vantage point the philosopher Thomas Nagel called > "the view from nowhere". You think the table over there is large, I > may think it is small. We measure it and find that it is 0.66 metres > square. We now characterise the table in a way that is less beholden > to personal experience. > > Thus measurement takes us further from experience and the phenomena > of subjective consciousness to a realm where things are described in > abstract but quantitative terms. To do its work, physical science > has to discard "secondary qualities", such as colour, warmth or > cold, taste - in short, the basic contents of consciousness. For the > physicist then, light is not in itself bright or colourful, it is a > mixture of vibrations in an electromagnetic field of different > frequencies. The material world, far from being the noisy, > colourful, smelly place we live in, is colourless, silent, full of > odourless molecules, atoms, particles, whose nature and behaviour is > best described mathematically. In short, physical science is about > the marginalisation, or even the disappearance, of phenomenal > appearance/qualia, the redness of red wine or the smell of a smelly > dog. Yes
It is due to the > self-contradictory nature of the task, of which the failure to > explain "aboutness", the unity and multiplicity of our awareness, > the explicit presence of the past, the initiation of actions, the > construction of self are just symptoms. We cannot explain > "appearings" using an objective approach that has set aside > appearings as unreal and which seeks a reality in mass/energy that > neither appears in itself nor has the means to make other items > appear. The brain, seen as a physical object, no more has a world of > things appearing to it than does any other physical object. The brain is an information processing and control system powerhouse. It also has this associated subjective consciousness, which appears to be related to, and to have access to, only a very small part of the brain, given how unaware we are of our own internal workings. The way the author talks about subjective consciousness, he makes it sound like an indivisible whole, atomic. Yet our brains & minds are clearly anything but. The very fact that we are so ignorant of how our mind works shows that the parts which correlate directly with consciousness have direct access to very little of the rest of the brain. I think I've said enough about why I think this guy is wrong. How about I go out on a limb and say what I think about subjective consciousness? I can't say how it works, but I have some ideas on why it exists and what it's for. It seems to me that subjective consciousness is simply a module of the mind, which is for something very specific, and that is to feel things. Qualia like the "redness of red" and emotions like anger share the property of being felt; they are the same kind of thing. It's clear to me at least that this is a functional module, in that it takes information from other parts of the brain as input (for example, the currently imagined representation of the world, whether that is current or a reloaded past), produces feelings (how? No idea), then outputs that back to the other parts of the brain, affecting them in appropriate ways. The other parts of the brain do everything else; they create all our "ideas" (and then we get to feel that "aha" moment), they make all our decisions (to which are added some feelings of volition), they do all the work. The feelings produced/existing in the subjective consciousness module are like side effects of all that, but they go back in a feedback loop to influence the future operation of the other parts. Why would you have something like this? What can this do that a non-subjectively conscious module couldn't? Why not just represent emotions (with descriptive tags, numerical levels, canned processing specific to each one) - why actually *feel* them? To me that's as big a question as how. I can't explain that. What's interesting though is how the purpose of the mechanism of feeling seems to be to guide all the other areas, to steer them. eg: some bits of the brain determine that we are in a fight-or-flight situation. They decide "flight". They inform the feeling module (subjective consciousness) that we need to feel fear. The feeling module does that ("Fear!"), and informs appropriate other parts of the brain to modify their processing in terms appropriate to fear (affecting decision making, tagging our memories with "I was scared here", even affecting our raw input processing). So we feel scared and do scared things.
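Here's that loop as a toy Python sketch - all the names and numbers are mine, invented purely to show the information flow (appraisal -> feeling module -> everything else, and back around):

    # Toy sketch of the feeling-module loop: appraisal -> feeling -> broadcast.
    def appraise(percepts):
        """Other bits of the brain decide we're in a fight-or-flight situation."""
        if percepts.get("large_shape_moving_fast"):
            return "fear", 0.9
        return "calm", 0.1

    def feel(emotion, intensity):
        """The mysterious bit, waved away: where the state is actually *felt*."""
        return {emotion: intensity}

    def broadcast(feelings, brain):
        """The felt state steers the other modules' processing."""
        fear = feelings.get("fear", 0)
        brain["decision_bias"] = "cautious" if fear > 0.5 else "normal"
        brain["memory_tag"] = "I was scared here" if fear > 0.5 else None
        brain["perception_gain"] = 1.0 + fear    # jumpier senses when afraid

    brain = {}
    broadcast(feel(*appraise({"large_shape_moving_fast": True})), brain)

The only part the sketch can't cash out is feel() itself; everything around it is ordinary plumbing, which is rather the point.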
Probably most importantly, we can break "not enough information" deadlocks in decision making with "well, what would the fearful choice be?" - that's motivation right there. It's a blunt instrument, which might be useful if you didn't have much else in terms of executive processes. It is really weird in our brains though, because we do; we have fantastic higher level processing that can do all kinds of abstract reasoning and complex planning and sophisticated decision making. Why do we also need the bludgeons of emotions like anger, restlessness, boredom, happiness? So we have roughly two systems doing similar things in very different ways, which you'd expect to fight. And thus the human condition :-) But where it would not be weird is in a creature without all this higher level processing stuff. Never mind how evolution came up with it in the first place (evolution is amazing that way), but given that it did, it would be a great platform for steering, motivating, guiding an unintelligent being. So what I'm getting at is, it's a relic from our deep evolutionary past. It's not higher cognitive functioning at all. Probably most creatures are subjectively conscious. They don't have language, they might not have much concept of the past or future, but they feel, just as we do (if in a less sophisticated way). They really have pleasure and pain and the redness of red. And suffering. We have a conceit that we (our subjectively conscious selves) are *really* our higher order cognitive processes, but I think that's wrong. We take pride in our ideas, especially the ones that come out of nowhere, but that should be a clue. They come out of "nowhere" and are simply revealed to the conscious us. "Nowhere" is the modern information processing bits of the brain, the neocortex, which does the heavy lifting and informs us of the result without showing the working. We claim our own decisions, but neuroscience, as well as plain old psychology, keeps showing us that decisions are made before we are aware of them, and that we simply rationalize volition where it doesn't exist. How do we make decisions? Rarely in a step-by-step derivational, rational way. More often they are "revealed" to us, they're "gut instinct". They come from some other part of the brain which simply informs "us" of the result. We think of the stream of internal dialogue, the voice in the mind, as truly "us", but where do all those thoughts come from? You can't derive them. It's like we are reading a tape with words on it, which comes from somewhere else; again, it's being sent in by another part of the brain that we don't have access to. We read them, the subjective-consciousness module adds feelings of ownership to them, and decorates them with emotional content, and the result feeds back out to the inaccessible parts of the brain, to influence the next round of thoughts on the tape. In short, I think that the vast majority of the brain is stuff that our "subjective" self can't access except indirectly through inputs and outputs. Most of the things that make us smart humans are actually out in this area, and are plain old information processing stuff; you could replace them with a chip, and as long as the interfaces were the same, you'd never know. I think the treasured conscious self is less like an AGI than like a tiny primitive animal, designed for fighting and fucking and fleeing and all that good stuff, which evolution has rudely uplifted by cobbling together a super brain and stapling it to the poor creature. I hope I'm right.
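The tape picture, sketched the same way (again, invented names; nothing here is meant as a real architecture):

    # Toy sketch of the thought tape: thoughts arrive from parts of the brain
    # "we" can't see; the subjective module claims and colours them; the
    # coloured result feeds back to shape what comes next.
    from collections import deque

    tape = deque(["that noise was odd", "check the door"])

    def subjective_module(thought, mood):
        """Read a thought, claim ownership, decorate it with feeling."""
        return {"thought": thought, "mine": True, "feeling": mood}

    def inaccessible_brain(decorated):
        """The hidden parts react to the felt output with new thoughts."""
        if decorated["thought"] == "check the door" and decorated["feeling"] == "uneasy":
            tape.append("maybe call someone")    # feedback shapes the next thought

    while tape:
        inaccessible_brain(subjective_module(tape.popleft(), "uneasy"))

Note that the conscious "us" in this toy never authors a thought; it just reads, owns and colours them, and the colouring steers what arrives next.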
If this is actually how we work, then the prospect of seriously hacking our brains is very good. You should be able to replace existing higher level modules with synthetic equivalents (or upgrades). You should be able to add new stuff, as long as it obeys the API (eg: add thoughts to the thought tape? take emotional input and modify accordingly?). Also, as to correlates of subjectively conscious experience in the mind, we should be looking for something that exists everywhere, not just in us. That might narrow it down a bit ;-) -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From max at maxmore.com Wed Jan 13 02:00:52 2010 From: max at maxmore.com (Max More) Date: Tue, 12 Jan 2010 20:00:52 -0600 Subject: [ExI] Max and Natasha live on China radio in 2 mins Message-ID: <201001130201.o0D215r5001157@andromeda.ziaspace.com> http://www.am880.net/today.asp From gts_2000 at yahoo.com Wed Jan 13 02:23:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 18:23:50 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <331623.31673.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stathis Papaioannou wrote: >> Now we know more about the mind than we did before, > even if we don't yet know the complete answer. > > It's not much of an answer. I was hoping you might say > something like, understanding is due to a special chemical reaction in > the brain... Well, yes, clearly neurons and neurochemistry and other biological factors in the brain enable our understanding of symbols. Sorry I can't tell you exactly how the science works; neuroscience still has much work to do. But this conclusion seems inescapable. To deny it one must leave the sane world of philosophical monism and enter into the not-so-sane world of dualism, in which mental phenomena exist in some ephemeral netherworld, or into the similarly not-so-sane world of idealism, in which matter does not even exist. But of course I'm making some value judgments here; dualists and idealists have rights to express their opinions too.
In fact, to my way of thinking, your experiments do exactly that: they create semi-robots that act like they have intentionality but don't, or which have compromised intentionality. They create weak AI. More in the morning if I get a minute. -gts From msd001 at gmail.com Wed Jan 13 04:17:25 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 12 Jan 2010 23:17:25 -0500 Subject: [ExI] Ecosia - is this BS? In-Reply-To: <710b78fc1001112107k44441477xcd10662430825e81@mail.gmail.com> References: <710b78fc1001112107k44441477xcd10662430825e81@mail.gmail.com> Message-ID: <62c14241001122017u68b0b181i8d5180e406575aa2@mail.gmail.com> On Tue, Jan 12, 2010 at 12:07 AM, Emlyn wrote: > My bogometer is in the red. Please read and critique. > > http://www.businessgreen.com/business-green/news/2254326/bing-backs-world-greenest > http://ecosia.org/ > > Should we translate this as "Microsoft greenwashes Bing, hapless WWF > lends support"? "...we could save a rainforest area as big as Switzerland each year." To indicate how far that analogy misses the mark, my only thought upon reading it was, "There are no rainforests in Switzerland." Might as well buy into the black pixel project for 'saving energy': http://www.treehugger.com/files/2009/06/black-pixel-is-it-possible-to-save-energy-one-pixel-at-a-time.php I think we'd save more energy (and reduce carbon footprint, etc.) if we gave corporations a small tax break for every employee who provably works from home to avoid the commute. They have little incentive to "allow" their employees to escape the cube-farm and remain at home. So instead, I drive 26 miles each direction to sit in front of a computer that I could have accessed remotely (or used VPN, etc.). Stupidly wasteful. :( From stathisp at gmail.com Wed Jan 13 04:35:58 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 15:35:58 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <331623.31673.qm@web36501.mail.mud.yahoo.com> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : >> In all that you and Searle have said, the strongest >> statement you can make is that a computer that is programmed to >> behave like a brain will not *necessarily* have the consciousness of >> the brain. > > I can say this with extremely high confidence: semantics does not come from syntax, and software/hardware systems as they exist today merely run syntactical programs. For this reason s/h systems of today cannot have semantics, i.e., they cannot overcome the symbol grounding problem. I don't accept that semantics does not come from syntax because I don't see where else, logically, semantics could come from. However, if I accept it for the sake of argument, you have agreed in the past that running a program incidentally will not destroy semantics. So it is possible for you to consistently hold that semantics does not come from syntax *and* that computers can have semantics, due to their substance or their processes, just as in the case of the brain. > Many philosophers have offered rebuttals to Searle's argument, but none of the reputable rebuttals deny the basic truth that the man in the room cannot understand symbols from manipulating them according to rules of syntax. It just can't happen. Yes, but the man in the room has an advantage over the neurons in the brain, because he at least understands that he is doing some sort of weird task, while the neurons understand nothing at all.
You would have to conclude that if the CR does not understand Chinese, then a Chinese speaker's brain understands it even less. >> In contrast, I have presented an argument which shows that >> it is *impossible* to separate understanding from behaviour. > > You and I both know that philosophical zombies do not defy any rules of logic. So I don't know what you mean by "impossible". In fact to my way of thinking your experiments do exactly that: they create semi-robots that act like they have intentionality but don't, or which have compromised intentionality. They create weak AI. I think it is logically impossible to create weak AI neurons. If weak AI neurons were possible, then it would be possible to arbitrarily remove any aspect of your consciousness leaving you not only behaving as if nothing had changed but also unaware that anything had changed. This would seem to go against any coherent notion of consciousness: however mysterious and ineffable it may be, you would at least expect that if your consciousness changed, for example if you suddenly went blind or aphasic, that you would notice something a bit out of the ordinary had happened. If you think that imperceptible radical change in consciousness is not self-contradictory, then I suppose weak AI neurons are logically possible. But you would then have the problem of explaining how you know now that you have not gone blind or aphasic without realising it, and why you should care if you had such an affliction. -- Stathis Papaioannou From jonkc at bellsouth.net Wed Jan 13 04:57:47 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 23:57:47 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <331623.31673.qm@web36501.mail.mud.yahoo.com> Message-ID: <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> > Many philosophers have offered rebuttals to Searle's argument, but none of the reputable rebuttals deny the basic truth that the man in the room cannot understand symbols And no reputable philosopher can deny that the man is not important, the room is. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jan 13 05:08:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 16:08:47 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> Message-ID: 2010/1/13 John Clark : > Many philosophers have offered rebuttals to Searle's argument, but none of > the reputable rebuttals deny the basic truth that the man in the room cannot > understand symbols > > And no reputable philosopher can deny that the man is not important, the > room is. Searle's response is for the man to internalise the cards and rules so that the room is eliminated. He then says that the man is the whole system and still doesn't understand Chinese, therefore the system doesn't understand Chinese. But that just means that Searle does not understand the concept of a system. 
-- Stathis Papaioannou From jonkc at bellsouth.net Wed Jan 13 05:19:42 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 13 Jan 2010 00:19:42 -0500 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car In-Reply-To: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> Message-ID: <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> On Jan 12, 2010, Max More wrote: > This is sad. Mr. Rollino looked amazingly good at 103. That is sad; still, if any death could be called a good death then that was it, as was the death of my great grandmother who died at the age of 98 when she was hit by a train (she was as deaf as a post by then but her mind was sharp) when she crossed the railroad tracks on her way to visit some friends at an old folks home. It made for a bit of a mess but that wasn't her problem, she didn't have to clean it up. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 13 05:58:09 2010 From: spike66 at att.net (spike) Date: Tue, 12 Jan 2010 21:58:09 -0800 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car In-Reply-To: <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> Message-ID: <3EA74E5F2AD346C689D830340790E256@spike> John I have read some weird comments on ExI-chat. Understatement, I have *written* some weird comments on ExI-chat. But fer cryin out loud man, what, if anything, in the hellll were you thinking when you wrote this? spike _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Sent: Tuesday, January 12, 2010 9:20 PM To: ExI chat list Subject: Re: [ExI] 'Strongest Man,' 104, dies after he's hit by car On Jan 12, 2010, Max More wrote: This is sad. Mr. Rollino looked amazingly good at 103. That is sad; still, if any death could be called a good death then that was it, as was the death of my great grandmother who died at the age of 98 when she was hit by a train (she was as deaf as a post by then but her mind was sharp) when she crossed the railroad tracks on her way to visit some friends at an old folks home. It made for a bit of a mess but that wasn't her problem, she didn't have to clean it up. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jonkc at bellsouth.net Wed Jan 13 06:53:49 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 13 Jan 2010 01:53:49 -0500 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car In-Reply-To: <3EA74E5F2AD346C689D830340790E256@spike> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> <3EA74E5F2AD346C689D830340790E256@spike> Message-ID: <4EA6E89A-249D-4BEF-8B00-E6B50F2865DF@bellsouth.net> On Jan 13, 2010, spike wrote: > what, if anything, in the hellll were you thinking when you wrote this? What's the problem? Like it or not, billions of human beings have experienced death, trillions if you don't get too picky on defining what a human being is, and some of those deaths were better than others. Of course if I were God I would make death physically impossible, and excruciating pain even more impossible. I applied for the job and I just don't understand why I didn't get it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 13 07:01:24 2010 From: spike66 at att.net (spike) Date: Tue, 12 Jan 2010 23:01:24 -0800 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car In-Reply-To: <4EA6E89A-249D-4BEF-8B00-E6B50F2865DF@bellsouth.net> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com><852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net><3EA74E5F2AD346C689D830340790E256@spike> <4EA6E89A-249D-4BEF-8B00-E6B50F2865DF@bellsouth.net> Message-ID: ...On Behalf Of John Clark On Jan 13, 2010, spike wrote: what, if anything, in the hellll were you thinking when you wrote this? What's the problem? ...I applied for the job and I just don't understand why I didn't get it. John K Clark Ja, I just failed to see the humor. And I am one of those cats who seldom fails to see the humor. But I will get over it. Sorry to hear of your family's loss. spike From gts_2000 at yahoo.com Wed Jan 13 12:44:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 04:44:50 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <199136.9587.qm@web36503.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stathis Papaioannou wrote: > I don't accept that semantics does not come from syntax > because I don't see where else, logically, semantics could come from. > However, if I accept it for the sake of argument, you have agreed in > the past that running a program incidentally will not destroy > semantics. So it is possible for you to consistently hold that > semantics does not come from syntax *and* that computers can have > semantics, due to their substance or their processes, just as in the > case of the brain. No, not if by "computer" you mean "software/hardware system". Although we might call the brain a type of computer, we cannot call it a computer of the s/h system type because the brain has semantics and s/h systems do not. Your p-neurons equal s/h systems, and in your thought experiments you network these s/h systems and then imagine that networked s/h systems have semantics. > Yes, but the man in the room has an advantage over the > neurons in the brain, because he at least understands that he is > doing some sort of weird task, while the neurons understand nothing at > all. You would have to conclude that if the CR does not understand > Chinese, then a Chinese speaker's brain understands it even less. I would only draw that conclusion if I believed that real Chinese brains were s/h systems, which I do not.
In other words, I think you miss the lesson of the experiment, which is that real brains/minds do something we don't yet fully understand. They ground symbols, something s/h systems cannot do. This leads to the next phase in the argument: that real brains have evolved a biological, non-digital means for grounding symbols. > I think it is logically impossible to create weak AI > neurons. If weak AI neurons were possible, then it would be > possible to arbitrarily remove any aspect of your consciousness > leaving you not only behaving as if nothing had changed but also > unaware that anything had changed. This would seem to go against any > coherent notion of consciousness: however mysterious and ineffable it > may be, you would at least expect that if your consciousness changed, > for example if you suddenly went blind or aphasic, that you would notice > something a bit out of the ordinary had happened. If you think that > imperceptible radical change in consciousness is not self-contradictory, > then I suppose weak AI neurons are logically possible. But you would > then have the problem of explaining how you know now that you have not > gone blind or aphasic without realising it, and why you should care if > you had such an affliction. If you replace the neurons associated with "realizing it" then the patient will not realize it. If you leave those neurons alone but replace the neurons in other important parts of the brain, the patient will become a basket case in need of more surgery, as we have discussed already. It seems to me that in your laboratory you create many kinds of strange Frankenstein monsters that think and do many absurd and self-contradictory things, depending on which neurons you replace, and that you then try to draw meaningful conclusions based on the disturbed thoughts and behaviors of the monsters that you have yourself created. In the final analysis, will a person whose brain consists entirely of p-neurons have strong AI? I think the answer is no, for the same reason that I think a network of ordinary computers does not. -gts From gts_2000 at yahoo.com Wed Jan 13 13:54:09 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 05:54:09 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <510795.43385.qm@web36503.mail.mud.yahoo.com> --- On Wed, 1/13/10, Stathis Papaioannou wrote: > Searle's response is for the man to internalise the cards > and rules so that the room is eliminated. He then says that the man is > the whole system and still doesn't understand Chinese, therefore the > system doesn't understand Chinese. Right. > But that just means that Searle > does not understand the concept of a system. The point is that the man now IS the system. He becomes the room that some detractors insisted understood the symbols even if the man inside did not. He now has everything the room had, yet neither he nor anything inside him understands. The entire CR thought experiment was just a parable to help people see the obvious: that syntax is not sufficient for semantics -- that mere knowledge of how to manipulate symbols is not sufficient for gleaning their meanings. But some people missed the point and attacked the parable. Any 7th grade English teacher will teach the same thing that Searle taught: that understanding of syntax does not in itself lead to understanding of word meanings. One cannot become conversant in any language without understanding both its syntax (grammar) and its semantics (vocabulary), and the two things are different. 
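If it helps, the whole trick fits in a few lines of toy Python (my own contrivance, not anyone's real program) - a "room" that answers Chinese by shape-matching alone:

    # A toy Chinese Room: pure symbol shuffling, no grounding anywhere.
    # The program only matches shapes; any meaning in the table was put
    # there by whoever wrote it, not derived by the code.
    rules = {
        "你好吗": "我很好",                # pairs a greeting with a reply
        "你叫什么名字": "我没有名字",      # ditto for a question about names
    }

    def room(symbols):
        return rules.get(symbols, "请再说一遍")   # fallback: "say that again"

    print(room("你好吗"))   # a sensible-looking reply; nothing here understands

Nothing in room() understands anything; whatever sense the exchange makes was baked into the table by its author.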
Software/hardware systems *seem* to get semantics only because the programmer acquired semantics in elementary school and then learned in college how to simulate semantics with syntax in formal programs - and the illusion holds only if the computer operator either doesn't understand this or pretends it isn't so. -gts From stathisp at gmail.com Wed Jan 13 13:55:17 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 00:55:17 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <199136.9587.qm@web36503.mail.mud.yahoo.com> References: <199136.9587.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : > --- On Tue, 1/12/10, Stathis Papaioannou wrote: > >> I don't accept that semantics does not come from syntax >> because I don't see where else, logically, semantics could come from. >> However, if I accept it for the sake of argument, you have agreed in >> the past that running a program incidentally will not destroy >> semantics. So it is possible for you to consistently hold that >> semantics does not come from syntax *and* that computers can have >> semantics, due to their substance or their processes, just as in the >> case of the brain. > > No, not if by "computer" you mean "software/hardware system". > > Although we might call the brain a type of computer, we cannot call it a computer of the s/h system type because the brain has semantics and s/h systems do not. > > Your p-neurons equal s/h systems, and in your thought experiments you network these s/h systems and then imagine that networked s/h systems have semantics. Running formal programs does not (you claim) produce semantics, but neither does it prevent semantics. Therefore, computers can have semantics by virtue of some quality other than running formal programs. >> Yes, but the man in the room has an advantage over the >> neurons in the brain, because he at least understands that he is >> doing some sort of weird task, while the neurons understand nothing at >> all. You would have to conclude that if the CR does not understand >> Chinese, then a Chinese speaker's brain understands it even less. > > I would only draw that conclusion if I believed that real Chinese brains were s/h systems, which I do not. In other words, I think you miss the lesson of the experiment, which is that real brains/minds do something we don't yet fully understand. They ground symbols, something s/h systems cannot do. That misses the point of the CRA, which is to show that the man has no understanding of Chinese, therefore the system has no understanding of Chinese. The argument ought not assume from the start that the CR has no understanding of Chinese on account of it being an s/h system, since that is the point at issue. So with the brain: the neurons don't understand Chinese, therefore the brain doesn't understand Chinese. But the brain does understand Chinese; so the claim that if the components of a system don't have understanding then neither does the system is not valid. > This leads to the next phase in the argument: that real brains have evolved a biological, non-digital means for grounding symbols. > >> I think it is logically impossible to create weak AI >> neurons. If weak AI neurons were possible, then it would be >> possible to arbitrarily remove any aspect of your consciousness >> leaving you not only behaving as if nothing had changed but also >> unaware that anything had changed.
This would seem to go against any >> coherent notion of consciousness: however mysterious and ineffable it >> may be, you would at least expect that if your consciousness changed, >> for example if you suddenly went blind or aphasic, that you would notice >> something a bit out of the ordinary had happened. If you think that >> imperceptible radical change in consciousness is not self-contradictory, >> then I suppose weak AI neurons are logically possible. But you would >> then have the problem of explaining how you know now that you have not >> gone blind or aphasic without realising it, and why you should care if >> you had such an affliction. > > If you replace the neurons associated with "realizing it" then the patient will not realize it. If you leave those neurons alone but replace the neurons in other important parts of the brain, the patient will become a basket case in need of more surgery, as we have discussed already. No, he won't become a basket case. If the patient's visual cortex is replaced and the rest of his brain is intact then (a) he will behave as if he has normal vision because his motor cortex receives the same signals as before, and (b) he will not notice that anything has changed about his vision, since if he did he would tell you and that would constitute a change in behaviour, as would going crazy. These two things are *logically* required if you accept that p-neurons of the type described are possible. There are several ways out of the conundrum: (1) p-neurons are impossible, because they won't behave like b-neurons (i.e. there is something uncomputable about the behaviour of neurons); (2) p-neurons are possible, but zombie p-neurons are impossible; (3) zombie p-neurons are possible and your consciousness will fade away without you noticing if they are installed in your head; (4) zombie p-neurons are possible and you will notice your consciousness fading away if they are installed in your head but you won't be able to do anything about it. That covers all the possibilities. I favour (2). Searle favours (4), though apparently without realising that it entails an implausible form of dualism (your thinking is done by something other than your brain which functions in lockstep with your behaviour until the p-neurons are installed). Your answer is that the patient will go mad, but that simply isn't possible, since by the terms of the experiment his brain is constrained to behave as sanely as it would have without any tampering. I suspect you're making this point because you can see the absurdity the thought experiment is designed to demonstrate but don't feel comfortable committing to any of the above four options to get out of it. -- Stathis Papaioannou From stathisp at gmail.com Wed Jan 13 14:22:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 01:22:08 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <510795.43385.qm@web36503.mail.mud.yahoo.com> References: <510795.43385.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/14 Gordon Swobe : > --- On Wed, 1/13/10, Stathis Papaioannou wrote: > >> Searle's response is for the man to internalise the cards >> and rules so that the room is eliminated. He then says that the man is >> the whole system and still doesn't understand Chinese, therefore the >> system doesn't understand Chinese. > > Right. > >> But that just means that Searle >> does not understand the concept of a system. > > The point is that the man now IS the system. 
He becomes the room that some detractors insisted understood the symbols even if the man inside did not. He now has everything the room had, yet neither he nor anything inside him understands. The man physically constitutes the whole system but that does not mean that understanding at a higher level does not supervene on his low level symbol processing. That is what neurons do: the individual neurons are stupid, and they remain stupid despite the fact that intelligence and consciousness supervene on their individually stupid behaviour. Perhaps a variant of the CR where there are *two* men cooperating in the symbol processing might drive home the point. Neither of the men understands Chinese; do you think it is now possible that the system understands Chinese? What if the two men are telepathically linked so that they form one mind: does the system suddenly lose the understanding of Chinese that it had when they were separate? The CRA is meant to demonstrate that syntax cannot produce semantics, but it cannot do so without assuming its conclusion beforehand. The two-man CR is even more closely analogous to the brain, so if the argument is that the two-man CR does not have understanding, then it is also an argument that the brain of a Chinese speaker lacks understanding. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Jan 13 14:33:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 06:33:36 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <631550.23799.qm@web36505.mail.mud.yahoo.com> --- On Wed, 1/13/10, Stathis Papaioannou wrote: > The man physically constitutes the whole system but that > does not mean that understanding at a higher level does not supervene > on his low level symbol processing. Understanding at what higher level? The man stands in the middle of a field, naked, processing Chinese symbols in his head according to the syntactic rules specified in the program. Show me who or what understands the symbols. -gts From stathisp at gmail.com Wed Jan 13 15:15:43 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 02:15:43 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <631550.23799.qm@web36505.mail.mud.yahoo.com> References: <631550.23799.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/14 Gordon Swobe : > --- On Wed, 1/13/10, Stathis Papaioannou wrote: > >> The man physically constitutes the whole system but that >> does not mean that understanding at a higher level does not supervene >> on his low level symbol processing. > > Understanding at what higher level? > > The man stands in the middle of a field, naked, processing Chinese symbols in his head according to the syntactic rules specified in the program. Show me who or what understands the symbols. Suppose neurons are smart enough to understand their individual job, such as that they have to fire when they see a certain concentration of neurotransmitter, but not smart enough to understand the big picture. These neurons are in a Chinese speaker's head, and the rest of the cells in his body are no smarter than the neurons. Show me who or what understands Chinese. -- Stathis Papaioannou From jonkc at bellsouth.net Wed Jan 13 17:33:10 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 13 Jan 2010 12:33:10 -0500 Subject: [ExI] Meaningless Symbols.
In-Reply-To: <510795.43385.qm@web36503.mail.mud.yahoo.com> References: <510795.43385.qm@web36503.mail.mud.yahoo.com> Message-ID: <941FEB19-1E85-48B8-B05E-1A07D57F843F@bellsouth.net> On Jan 13, 2010, Gordon Swobe wrote: > The point is that the man now IS the system. He becomes the room that some detractors insisted understood the symbols even if the man inside did not. He now has everything the room had, yet neither he nor anything inside him understands. That is your error right there: "nor anything inside him". After this huge, gigantic, ridiculously large transformation you continue to talk about "the man" as if nothing has happened and as if what's inside him is still just one thing. In your thought experiment you don't give us one shred of evidence that there is no understanding inside the man; you simply state it and then demand that we explain that fact. Well, it's not a fact, it's just another of your decrees. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Wed Jan 13 17:14:43 2010 From: aware at awareresearch.com (Aware) Date: Wed, 13 Jan 2010 09:14:43 -0800 Subject: [ExI] Seeing Through New Lenses In-Reply-To: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com> References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com> Message-ID: Forwarding an article with relevance to the Extropy list. It's ironic that in a forum dedicated to the topic of "extropy", which might be interpreted as increasingly meaningful increasing change, there's such vocal support for hard rationality, simple truth, and seeing the world the way it "really is." - Jef ---------------------- Edge Perspectives with John Hagel: Relationships and Dynamics - Seeing Through New Lenses Do we all look at the world in the same way? Hardly. We can each look at the same scene and focus our attention on something completely different. Individual idiosyncrasies definitely play a role, but broader patterns of perception are at work as well. Are certain patterns of perception more or less helpful in these rapidly changing times? Most definitely - in fact, they may determine who succeeds and who fails. About five years ago, Richard Nisbett, a professor of psychology, wrote "The Geography of Thought." This fascinating book drew on extensive research pointing to fundamental cultural differences in how we see the world. Specifically, he contrasted an East Asian way of seeing the world with a more traditional Western way of seeing. While it would be difficult to summarize Nisbett's rich analysis, I want to focus on a key distinction that he develops in his analysis of two cultural ways of perceiving our world. He suggests that East Asians focus on relationships as the key dimension of the world around us while Westerners tend to focus more on isolated objects. In other words, East Asians tend to adopt more holistic views of the world while Westerners are more oriented to reductionist views. This basic difference plays out in fascinating ways, including the greater attention by East Asian children to verbs while Western children tend to learn nouns faster. One very tangible illustration of this is a simple test reported by Nisbett. A developmental psychologist showed three pictures to children - a cow, a chicken and some grass. He asked children from America which two of the pictures belonged together. Most of them grouped the cow and chicken together because they were both objects in the same category of animals. Chinese children on the other hand tended to group the cow and grass together because "cows eat grass" - they focused on the relationship between two objects rather than the objects themselves. [Which of these do YOU prefer? Which of these do you think is closer to The Truth? Why? - Jef] I found this intriguing in the context of our continuing work at the Center for the Edge on the Big Shift. As I indicated in a previous posting, the Big Shift is a movement from a world where value creation depends on knowledge stocks to one where value resides in knowledge flows - in other words, objects versus relationships. Our Western way of perceiving has been very consistent with a world of knowledge stocks and short-term transactions. As we move into a world of knowledge flows, though, I suspect the East Asian focus on relationships may be a lot more helpful to orient us (no pun intended). Of course, this is not an either/or proposition. Nisbett holds out hope that these perspectives might ultimately converge, citing some promising research evidence: "So, I believe the twain shall meet by virtue of each moving in the direction of the other. East and West may contribute to a blended world where social and cognitive aspects of both regions are represented but transformed - like the individual ingredients in a stew that are recognizable but are altered as they alter the whole. It may not be too much to hope that this stew will contain the best of each culture." But wait, there is more. The distinction between perception of objects and relationships is just one dimension of difference. In fact, the East Asian and Western modes of seeing share one common element: they view the world as largely static. As Nisbett points out, the Greek philosophers gave us the notion that "the world is fundamentally static and unchanging." East Asians tend to focus on oscillations and cycles which acknowledge change but contain it in relatively narrow fields - the world is in flux but it does not head in fundamentally different directions over long periods of time. So, there is another dimension that differentiates perception - and this is a point that Nisbett sadly does not explore or develop. Some of us tend to view the world in static terms while others focus on the deep dynamics that lead to fundamental transformations over time. Many executives, especially in large firms, tend to adopt a static view of the world. They want detailed snapshots of their environments to drive their decision-making. When they go to distant countries and markets, they carefully observe the state of play as it is today, but they rarely ask for "videos" - detailed analyses of the trajectories of change that have been playing out over years and are likely to shape future markets. Even in the more contemporary world of social network analysis, this analysis often remains highly static - elegant maps show the rich structures of these social networks as they exist today, but they rarely reveal the dynamics that evolve these networks over time. Why is this the case? Many factors contribute to this static view of the world. Modern enterprise is built on the notion of scalable efficiency, and scalable efficiency requires predictability. Predictions are much easier in stable or static worlds, so executives are predisposed to see the world in these terms. Change can be highly unpredictable and can rapidly call into question the ability to predict demand for products or services. Whether one sees in terms of objects or relationships, these are much easier to understand and analyze if they remain stable. Contemporary economics is largely built around equilibrium models that are essential if the detailed econometric analytics are to work. Social networks are complex and messy as it is, without having to factor in even more complex dynamics that continually reshape these networks over time. We don't even have a very robust set of categories to describe various trajectories that can play out over time. Let's face it, life would be a lot simpler if everything just came to a halt and stayed the way it is right now. But, of course, it does not stand still. Our world is constantly evolving in complex and unexpected ways. And there is evidence that it is evolving ever more rapidly, generating disruptions that send people and things careening in new and unanticipated directions. Product life cycles are compressing across many, if not most, industries. The movement from products to services as key drivers of growth reinforces this trend, since services can often be updated far more frequently than products. With the growth of outsourcing, new competitors can enter and scale positions in global markets in ways that simply were not feasible in the past when capital intensive physical facilities needed to be built before products could be launched. Edges of new innovation rise quickly and gather force to challenge entrenched positions in the core of our global economy. Black swans pop up with increasing frequency, seemingly out of nowhere and challenging some of our most basic assumptions about the world around us. Yet, we do not have very good lenses or analytic tools to bring these dynamics to the forefront. They tend to operate behind the scenes, rarely seen until it is too late and the latest disruption is enveloping us. Survival in this more rapidly changing world requires developing new modes of perception, ones that put structure in the background and focus attention on the deep dynamics that are re-shaping the structures around us. This is the other key message of the Big Shift work. We are going through a profound long-term shift in the way our global business landscapes are evolving. We get so caught up in short-term events that we lose sight of these long-term changes, much less understanding what is driving them or thinking about their implications for how we work and live. As we have emphasized, we must learn to make sense of the changes unfolding around us before we can make progress. Even more fundamentally, we must learn to see these changes, searching them out where they remain hidden or obscured and penetrating through the surface currents of change to focus on the deeper dynamics shaping these currents. What is required to do this? Well, first we need to embrace change rather than dampen or suppress it. Virginia Postrel wrote "The Future and Its Enemies" over a decade ago, a fascinating book that described a persistent and intensifying conflict between stasists, those who fear and resist change, and dynamists, those who welcome change as an opportunity to create even more value for more people. Those who fear and resist change spend relatively little time understanding change - all of their energy is focused on blocking it. By embracing change, we begin to see the opportunities it creates. We are motivated to explore the contours of change in ways that move us from focusing on what is to what could be. As we begin this migration, we will need new analytic tools to help us on our way. Promising early toolkits can be found in diverse arenas. For example, the Santa Fe Institute is studying the evolution of complex adaptive systems and increasing returns dynamics. On another front, the revival of Austrian economics challenges equilibrium analysis and instead focuses on processes of change unleashed by distributed tacit knowledge, inspired by the early work of Friedrich Hayek. In yet another arena, work in the technology world seeks to understand the implications of continuing exponential improvement in the price/performance of digital technology as it breaches the boundaries of computing and invades such diverse arenas as biology, materials science and robotics. Stepping back from all of this, the challenge is great, especially for those of us in the West. We must learn to shift attention from objects to relationships while at the same time moving from structure to dynamics as the key lens for perception. We were not trained this way. We generally have not operated in this way. All of our assumptions tell us that this is the wrong way. Yet, there are enormous opportunities for those who do make this shift. Perhaps most importantly, those of us who remain wedded to the old way of seeing things will find ourselves increasingly stressed, blindsided and marginalized in a world that will continue to move on without us. From thespike at satx.rr.com Wed Jan 13 18:31:36 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 13 Jan 2010 12:31:36 -0600 Subject: [ExI] Seeing Through New Lenses In-Reply-To: References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com> Message-ID: <4B4E1188.7050603@satx.rr.com> On 1/13/2010 11:14 AM, Aware wrote: > A developmental psychologist showed three pictures to > children - a cow, a chicken and some grass. He asked children from > America which two of the pictures belonged together. Most of them > grouped the cow and chicken together because they were both objects in > the same category of animals. Chinese children on the other hand > tended to group the cow and grass together because "cows eat grass" - > they focused on the relationship between two objects rather than the > objects themselves. > > [Which of these do YOU prefer? Which of these do you think is closer > to The Truth? Why?
- Jef] > > The cow and the chicken, because they are Friends. Damien The chicken and the grass belong together, with the cow being the odd lifeform out. Clearly neither the chicken nor the grass feeds their offspring directly from glands on their bodies. spike From natasha at natasha.cc Wed Jan 13 19:22:55 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 13 Jan 2010 14:22:55 -0500 Subject: [ExI] Seeing Through New Lenses In-Reply-To: <248C93189859442BBF93F46F962216C8@spike> References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com> <4B4E1188.7050603@satx.rr.com> <248C93189859442BBF93F46F962216C8@spike> Message-ID: <20100113142255.jvam7een28ws0wok@webmail.natasha.cc> LOL Quoting spike : > ... >> >> > A developmental psychologist showed three pictures to children - a >> > cow, a chicken and some grass. He asked children from America which >> > two of the pictures belonged together. Most of them >> grouped the cow >> > and chicken together because they were both objects in the same >> > category of animals. Chinese children on the other hand tended to >> > group the cow and grass together because "cows eat grass" - they >> > focused on the relationship between two objects rather than the >> > objects themselves. >> > >> > [Which of these do YOU prefer? Which of these do you think >> is closer >> > to The Truth? Why? - Jef] >> >> The cow and the chicken, because they are Friends. Damien > > The chicken and the grass belong together, with the cow being the odd > lifeform out. Clearly neither the chicken nor the grass feeds their > offspring directly from glands on their bodies. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From gts_2000 at yahoo.com Wed Jan 13 19:50:13 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 11:50:13 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <321375.66546.qm@web36501.mail.mud.yahoo.com> --- On Wed, 1/13/10, Stathis Papaioannou wrote: > Suppose neurons are smart enough to understand their > individual job, such as that they have to fire when they see a > certain concentration of neurotransmitter, but not smart enough to > understand the big picture. These neurons are in a Chinese speaker's > head, and the rest of the cells in his body are no smarter than the > neurons. Show me who or what understands Chinese. In that case the system understands Chinese. Evidently it learned it somewhere, and as far as I know only human systems can do it. The question *here* concerns whether people or computers can learn or understand Chinese from following rules of syntax only, because formal programs have only rules of syntax. Again I ask you: The Englishman stands naked in a field. He represents the entire system. He and his neurons (trillions upon trillions upon trillions of them if you like) process Chinese symbols according to the *syntactic rules specified in a program* which he and his neurons have memorized. Show me who or what understands the meanings of the symbols. If you cannot then you agree with your 7th grade English teacher who knew that following the rules of grammar (syntax) is not the same as understanding the meanings of the words (semantics). That's why your teacher tested your grammar and vocabulary skills on different days of the week. *They're different subjects*. You used to know this.
-gts

From gts_2000 at yahoo.com  Wed Jan 13 20:05:59 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 13 Jan 2010 12:05:59 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
Message-ID: <483027.77459.qm@web36506.mail.mud.yahoo.com>

By the way I assume by "Chinese speaker" in your example below that you refer to a native Chinese speaker, or to some person/system who understands Chinese as people actually understand it, i.e., by means other than running syntactic programs in their heads.

-gts

--- On Wed, 1/13/10, Gordon Swobe wrote:

> From: Gordon Swobe
> Subject: Re: [ExI] Meaningless Symbols.
> To: "ExI chat list"
> Date: Wednesday, January 13, 2010, 2:50 PM
>
> --- On Wed, 1/13/10, Stathis Papaioannou wrote:
>
> > Suppose neurons are smart enough to understand their
> > individual job, such as that they have to fire when they see a
> > certain concentration of neurotransmitter, but not smart enough to
> > understand the big picture. These neurons are in a Chinese speaker's
> > head, and the rest of the cells in his body are no smarter than the
> > neurons. Show me who or what understands Chinese.
>
> [snip]

From eric at m056832107.syzygy.com  Wed Jan 13 20:35:37 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 13 Jan 2010 20:35:37 -0000
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <321375.66546.qm@web36501.mail.mud.yahoo.com>
References: <321375.66546.qm@web36501.mail.mud.yahoo.com>
Message-ID: <20100113203537.5.qmail@syzygy.com>

Gordon asks:
>
>The Englishman stands naked in a field. He represents the entire
> system. He and his neurons (trillions upon trillions upon trillions
> of them if you like) process Chinese symbols according to the
> *syntactic rules specified in a program* which he and his neurons
> have memorized. Show me who or what understands the meanings of the
> symbols.

That's easy: the data state of the system is where the understanding is.

Syntactic processing involves keeping machine state. In this case, that state might represent interconnections between neurons, and the strengths of those interconnections. Those connections and strengths change over time, and are data which can be syntactically manipulated to model those changes. The changes represent learning. The semantics is learned based on experience. The semantics is encoded in that data.
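To sketch what I mean (a cartoon, not a model of real neurons -- the numbers, names and threshold here are invented for illustration):

# An interconnection strength stored as plain data,
# updated by a plain rule.
weight = 0.2    # connection strength between two "neurons" (data)
RATE = 0.1      # learning-rate constant (assumed for the example)

def fires(signal):
    # The downstream "neuron" fires when the weighted input
    # crosses a fixed threshold.
    return signal * weight > 0.15

def reinforce(pre_active, post_active):
    # Hebbian-style rule: when both sides are active, strengthen
    # the connection. A purely syntactic operation on the data.
    global weight
    if pre_active and post_active:
        weight = weight + RATE

for _ in range(3):
    reinforce(True, fires(1.0))
print(weight)   # roughly 0.5 after three reinforcements

The update rule never mentions meaning; it just rewrites a number. The learning is in the changed data.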
If you take that same data representing a system which understands something and use it to drive another computational process based on a different substrate, the resulting system will still understand the same things. If neural interconnections in a human brain have achieved some understanding, we can (theoretically) extract that understanding and move it to another substrate, like computationally based neurons.

Oh, and symbol grounding is learned based on interactions with external entities. Why is this such a mystery?

-eric

From stathisp at gmail.com  Thu Jan 14 00:20:17 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 14 Jan 2010 11:20:17 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <321375.66546.qm@web36501.mail.mud.yahoo.com>
References: <321375.66546.qm@web36501.mail.mud.yahoo.com>
Message-ID:

2010/1/14 Gordon Swobe :
> --- On Wed, 1/13/10, Stathis Papaioannou wrote:
>
>> Suppose neurons are smart enough to understand their
>> individual job, such as that they have to fire when they see a
>> certain concentration of neurotransmitter, but not smart enough to
>> understand the big picture. These neurons are in a Chinese speaker's
>> head, and the rest of the cells in his body are no smarter than the
>> neurons. Show me who or what understands Chinese.
>
> In that case the system understands Chinese. Evidently it learned it somewhere, and as far as I know only human systems can do it.

Hence the point: the system understands even though the parts of it don't. We already knew that was the case, so the CR does not add anything to the discussion.

> The question *here* concerns whether people or computers can learn or understand Chinese from following rules of syntax only, because formal programs have only rules of syntax.

Which the CRA does not help with. The man manipulates symbols without understanding them and so do the neurons.

> Again I ask you:
>
> The Englishman stands naked in a field. He represents the entire system. He and his neurons (trillions upon trillions upon trillions of them if you like) process Chinese symbols according to the *syntactic rules specified in a program* which he and his neurons have memorized. Show me who or what understands the meanings of the symbols.
>
> If you cannot then you agree with your 7th grade English teacher who knew that following the rules of grammar (syntax) is not the same as understanding the meanings of the words (semantics).
>
> That's why your teacher tested your grammar and vocabulary skills on different days of the week. *They're different subjects*. You used to know this.

In the first grade, the teacher made mouth noises and pointed to objects or pictures of objects. In later years it was more often relating one set of mouth noises to another set of mouth noises which had already been learned.

-- Stathis Papaioannou

From gts_2000 at yahoo.com  Thu Jan 14 01:23:45 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 13 Jan 2010 17:23:45 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
Message-ID: <399013.3045.qm@web36505.mail.mud.yahoo.com>

--- On Wed, 1/13/10, Stathis Papaioannou wrote:

> Hence the point: the system understands even though the
> parts of it don't. We already knew that was the case, so the CR
> does not add anything to the discussion.

Forget about the CR. Neither of us care if parts of the system understand anything. We want to know if the system as a whole knows Chinese from manipulating Chinese symbols according to rules of syntax.
It cannot, because syntax only tells the system 'what' to put 'where' and 'when'. The system looks at the forms of things, not at the meanings of things.

Here's the classic one-line program:

print "Hello World"

It takes the form

print "<string>"

The system does not understand or care about the semantic drivel you put in the string. It just follows the syntactic rule (and it doesn't care about that either, by the way) and prints the contents of the string.

Do you think the system understands the string? Do you think that upon running this program, a little conscious entity inside your computer will greet you? Seriously, Stathis, what do you think?

And by the way the most sophisticated program possible on a s/h system will differ in no philosophically important way from this one.

-gts

From eric at m056832107.syzygy.com  Thu Jan 14 01:50:56 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 14 Jan 2010 01:50:56 -0000
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <399013.3045.qm@web36505.mail.mud.yahoo.com>
References: <399013.3045.qm@web36505.mail.mud.yahoo.com>
Message-ID: <20100114015056.5.qmail@syzygy.com>

Gordon writes:
>
>Here's the classic one-line program:
>
>print "Hello World"
>
>It takes the form
>
>print "<string>"
>
>Do you think the system understands the string?

Of course not. If there is any understanding here, it is of the word "print". The interpreter maps the word "print" into an external action, and as a result ends up making the string visible. The CPU/RAM, etc. in the machine (the hardware) has no understanding of "print". That understanding (such as it is) is encoded in the software running on the computer.

Your understanding of the word "print" is not inherent in your brain, just as it isn't in the CPU. When you were born, you had the capacity to learn English, including the word "print", but you could have been taught Chinese instead. That teaching changed the neural interconnections in your brain, changing the way it reacts to English words, just as the interpreter program changes the way the computer hardware reacts to programming constructs like "print". Your understanding is encoded in those interconnections.

Understanding cannot be encoded in a single neuron, just as it cannot be encoded in a single transistor. It is the system of interconnections which learns to understand. That system of interconnections can be treated as data, and can be manipulated by programs using purely syntactic rules.

-eric

From stathisp at gmail.com  Thu Jan 14 01:56:38 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 14 Jan 2010 12:56:38 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <399013.3045.qm@web36505.mail.mud.yahoo.com>
References: <399013.3045.qm@web36505.mail.mud.yahoo.com>
Message-ID:

2010/1/14 Gordon Swobe :
> --- On Wed, 1/13/10, Stathis Papaioannou wrote:
>
>> Hence the point: the system understands even though the
>> parts of it don't. We already knew that was the case, so the CR
>> does not add anything to the discussion.
>
> Forget about the CR. Neither of us care if parts of the system understand anything. We want to know if the system as a whole knows Chinese from manipulating Chinese symbols according to rules of syntax.
>
> It cannot, because syntax only tells the system 'what' to put 'where' and 'when'. The system looks at the forms of things, not at the meanings of things.
>
> Here's the classic one-line program:
>
> print "Hello World"
>
> It takes the form
>
> print "<string>"
>
> The system does not understand or care about the semantic drivel you put in the string.
> It just follows the syntactic rule (and it doesn't care about that either, by the way) and prints the contents of the string.
>
> Do you think the system understands the string? Do you think that upon running this program, a little conscious entity inside your computer will greet you? Seriously, Stathis, what do you think?

No, because this program is less complex than even a single neuron.

> And by the way the most sophisticated program possible on a s/h system will differ in no philosophically important way from this one.

The problem is that you can't explain how humans get their understanding. It doesn't help to say that some physical activity happens in neurons which produces the understanding, not because you haven't given the details of the physical activity, but because you haven't explained how, in general terms, it is possible for the physical activity in a brain to pull off that trick but not the physical activity in a computer. Even if it's true that computers only do syntax and syntax can't produce meaning (it isn't, since logically there is nowhere else for meaning to come from) this does not mean that computers can't produce meaning. It would be like saying brains only do chemistry and chemistry can't produce meaning. In the course of the chemistry brains manipulate symbols and that's where the meaning comes from if you believe meaning can only come from symbol manipulation; and in the course of manipulating symbols computers are physically active and that's where the meaning comes from if you believe meaning can only come from physical activity.

-- Stathis Papaioannou

From max at maxmore.com  Thu Jan 14 02:21:46 2010
From: max at maxmore.com (Max More)
Date: Wed, 13 Jan 2010 20:21:46 -0600
Subject: [ExI] The Nature of Technology: What It Is and How It Evolves (Brian Arthur)
Message-ID: <201001140222.o0E2M4Hv027890@andromeda.ziaspace.com>

Many of the economically-inclined people here will be familiar with the previous work of economist W. Brian Arthur. I just read a disappointing interview with him...

The Evolution of Technology
by Art Kleiner
strategy+business, January 4, 2010
http://www.strategy-business.com/article/00014?pg=all

My review/commentary (written for executives) is here:
http://www.manyworlds.com/exploreco.aspx?coid=CO1121015233189
http://www.manyworlds.com/exploreCO.aspx?coid=CO1121015233189

His new book is The Nature of Technology: What It Is and How It Evolves. Has anyone read it? Is it drastically better than this interview suggests? (My doubt is partly because I know Kleiner is a smart guy and a practiced interviewer, so the lack of real content is unlikely to be his fault.)

Max

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
The Proactionary Project
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From avantguardian2020 at yahoo.com  Thu Jan 14 02:34:42 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Wed, 13 Jan 2010 18:34:42 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
References: <399013.3045.qm@web36505.mail.mud.yahoo.com>
Message-ID: <876885.6117.qm@web65609.mail.ac4.yahoo.com>

----- Original Message ----
> From: Stathis Papaioannou
> To: gordon.swobe at yahoo.com; ExI chat list
> Sent: Wed, January 13, 2010 5:56:38 PM
> Subject: Re: [ExI] Meaningless Symbols.
>
> The problem is that you can't explain how humans get their
> understanding.
> It doesn't help to say that some physical activity
> happens in neurons which produces the understanding, not because you
> haven't given the details of the physical activity, but because you
> haven't explained how, in general terms, it is possible for the
> physical activity in a brain to pull off that trick but not the
> physical activity in a computer. Even if it's true that computers only
> do syntax and syntax can't produce meaning (it isn't, since logically
> there is nowhere else for meaning to come from) this does not mean
> that computers can't produce meaning.

*China room no important me say*

If you can understand that, then syntax is not all that relevant to human understanding at a fundamental level. A similarly scrambled statement in any scripting language that I can think of would have caused the program to halt. Yet your brain takes it in stride and understands. *This* is what I think is fascinating.

> It would be like saying brains
> only do chemistry and chemistry can't produce meaning. In the course
> of the chemistry brains manipulate symbols and that's where the
> meaning comes from if you believe meaning can only come from symbol
> manipulation; and in the course of manipulating symbols computers are
> physically active and that's where the meaning comes from if you
> believe meaning can only come from physical activity.

Brains, being part of the real world, do it all. Chemistry is merely a model that simplifies a small part of what reality does so that we can discuss it and think about it. But there are things that the universe does for which there are not yet words, symbols, or concepts. Imagine trying to explain "quantum erasure" to Plato in ancient Greek and you will see what I am getting at.

Stuart LaForge

"Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong

From emlynoregan at gmail.com  Thu Jan 14 03:31:01 2010
From: emlynoregan at gmail.com (Emlyn)
Date: Thu, 14 Jan 2010 14:01:01 +1030
Subject: [ExI] The Nature of Technology: What It Is and How It Evolves (Brian Arthur)
In-Reply-To: <201001140222.o0E2M4Hv027890@andromeda.ziaspace.com>
References: <201001140222.o0E2M4Hv027890@andromeda.ziaspace.com>
Message-ID: <710b78fc1001131931kb38a2dcwc49cd41a02ec02c7@mail.gmail.com>

2010/1/14 Max More :
> Many of the economically-inclined people here will be familiar with the
> previous work of economist W. Brian Arthur. I just read a disappointing
> interview with him...
>
> The Evolution of Technology
> by Art Kleiner
> strategy+business, January 4, 2010
> http://www.strategy-business.com/article/00014?pg=all
>
> My review/commentary (written for executives) is here:
> http://www.manyworlds.com/exploreco.aspx?coid=CO1121015233189
> http://www.manyworlds.com/exploreCO.aspx?coid=CO1121015233189
>
> His new book is The Nature of Technology: What It Is and How It Evolves. Has
> anyone read it? Is it drastically better than this interview suggests? (My
> doubt is partly because I know Kleiner is a smart guy and a practiced
> interviewer, so the lack of real content is unlikely to be his fault.)
>
> Max

I quite liked that interview; there's no depth, but then it's only short. But the fundamental point that the economy is an organizing system for technology is a good one. It reminds me of Kevin Kelly's talk "What Technology Wants" (I haven't read his book).

Is there something particular you disagreed with in this interview? Or is it just lightweight?
-- Emlyn

http://emlyntech.wordpress.com - coding related
http://point7.wordpress.com - ranting
http://emlynoregan.com - main site

From stathisp at gmail.com  Thu Jan 14 03:59:38 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 14 Jan 2010 14:59:38 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <876885.6117.qm@web65609.mail.ac4.yahoo.com>
References: <399013.3045.qm@web36505.mail.mud.yahoo.com> <876885.6117.qm@web65609.mail.ac4.yahoo.com>
Message-ID:

2010/1/14 The Avantguardian :

> *China room no important me say*
>
> If you can understand that, then syntax is not all that relevant to human understanding at a fundamental level. A similarly scrambled statement in any scripting language that I can think of would have caused the program to halt. Yet your brain takes it in stride and understands. *This* is what I think is fascinating.

A human natural language is not like a programming language but more like a complex application written in the programming language. The programming language is like the genetic code, which cannot cope with syntactical errors. And the genetic code is itself an example of an abstract program which, when implemented, gives rise to intelligence and consciousness.

-- Stathis Papaioannou

From max at maxmore.com  Thu Jan 14 05:38:36 2010
From: max at maxmore.com (Max More)
Date: Wed, 13 Jan 2010 23:38:36 -0600
Subject: [ExI] The Nature of Technology: What It Is and How It Evolves (Brian Arthur)
Message-ID: <201001140538.o0E5cqqT003998@andromeda.ziaspace.com>

>Is there something particular you disagreed with in this interview?

No.

>Or is it just lightweight?

Yes, very. And surprisingly so. It IS just an interview, so maybe the book is great. That's what I'm hoping to find out before actually reading it. (Too many books already in the stack.)

Max

From bbenzai at yahoo.com  Thu Jan 14 12:05:21 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 14 Jan 2010 04:05:21 -0800 (PST)
Subject: [ExI] Meaningless Symbols
In-Reply-To:
Message-ID: <296296.66105.qm@web113614.mail.gq1.yahoo.com>

Gordon Swobe claimed:

> Here's the classic one-line program:
>
> print "Hello World"
...
>
> And by the way the most sophisticated program possible on a
> s/h system will differ in no philosophically important way
> from this one.

That's just about the most ridiculous thing I've heard anyone say on this list. Or anywhere, for that matter. (I think it may be symptomatic of 'Terminal Confusion'!)

Ben Zaiboc

From gts_2000 at yahoo.com  Thu Jan 14 13:07:15 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 14 Jan 2010 05:07:15 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <876885.6117.qm@web65609.mail.ac4.yahoo.com>
Message-ID: <62712.42339.qm@web36503.mail.mud.yahoo.com>

--- On Wed, 1/13/10, The Avantguardian wrote:

> Yet your brain takes it in stride and understands...
> *This* is what I think is fascinating....
> ...But there are things that the universe does for
> which there are not yet words, symbols, or concepts.

Right, Stuart!

The human brain does things that software/hardware systems cannot and will never do. Its method remains a mystery for now, and that mystery makes some of us uncomfortable, but our discomfort does not change the facts.

-gts

From stathisp at gmail.com  Thu Jan 14 13:28:09 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 15 Jan 2010 00:28:09 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <62712.42339.qm@web36503.mail.mud.yahoo.com>
References: <876885.6117.qm@web65609.mail.ac4.yahoo.com> <62712.42339.qm@web36503.mail.mud.yahoo.com>
Message-ID:

2010/1/15 Gordon Swobe :
> --- On Wed, 1/13/10, The Avantguardian wrote:
>
>> Yet your brain takes it in stride and understands...
>> *This* is what I think is fascinating....
>> ...But there are things that the universe does for
>> which there are not yet words, symbols, or concepts.
>
> Right, Stuart!
>
> The human brain does things that software/hardware systems cannot and will never do. Its method remains a mystery for now, and that mystery makes some of us uncomfortable, but our discomfort does not change the facts.

You've previously implied that S/H systems *can* do everything the brain can, short of consciousness. Otherwise, zombies would be impossible.

-- Stathis Papaioannou

From gts_2000 at yahoo.com  Thu Jan 14 13:37:59 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 14 Jan 2010 05:37:59 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
Message-ID: <2573.11215.qm@web36506.mail.mud.yahoo.com>

--- On Wed, 1/13/10, Stathis Papaioannou wrote:

> The problem is that you can't explain how humans get their
> understanding.

I can explain how they do not get their understanding, which brings us one step closer to understanding how they do.

> It doesn't help to say that some physical activity
> happens in neurons which produces the understanding, not
> because you haven't given the details of the physical activity, but
> because you haven't explained how, in general terms, it is possible for
> the physical activity in a brain to pull off that trick but not
> the physical activity in a computer.

But I have. You just don't believe me or understand me or both.

> Even if it's true that computers only do syntax and syntax can't
> produce meaning (it isn't, since logically there is nowhere else for
> meaning to come from)

I think that last thought of yours needs some work. :)

You say "logically there is nowhere else for meaning to come from", but *logically* nothing can get semantics from knowing rules of syntax, or vocabulary from knowing rules of grammar. Instead of accepting an illogical answer to the question of meaning as you seem wont to do, I submit that the only logical choice is to accept that brains do something we don't yet fully understand. That leaves us with a bit of a mystery, but at least we haven't sacrificed logic to get there.

We would be pretty arrogant to pretend that we fully understand the human brain in 2010. We don't yet even know why George Foreman fell down in the 8th round against Muhammad Ali. Neuroscience is still in its infancy.

-gts

From gts_2000 at yahoo.com  Thu Jan 14 13:49:05 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 14 Jan 2010 05:49:05 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
Message-ID: <822204.96676.qm@web36508.mail.mud.yahoo.com>

--- On Thu, 1/14/10, Stathis Papaioannou wrote:

> You've previously implied that S/H systems *can* do
> everything the brain can, short of consciousness.

That last thing you mention seems pretty important.

-gts

From stathisp at gmail.com  Thu Jan 14 14:57:43 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 15 Jan 2010 01:57:43 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <2573.11215.qm@web36506.mail.mud.yahoo.com> References: <2573.11215.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/15 Gordon Swobe : >> It doesn't help to say that some physical activity >> happens in neurons which produces the understanding, not >> because you haven't given the details of the physical activity, but >> because you haven't explained how, in general terms, it is possible for >> the physical activity in a brain to pull off that trick but not >> the physical activity in a computer. > > But I have. You just don't believe me or understand me or both. You've said that formal programs can't produce understanding but physical activity can produce understanding. Computers not only run formal programs, they also do physical activity. You have a hunch that the sort of physical activity in computers is incapable of producing understanding. But a hunch is not good enough in a philosophical argument. To make your case you have to show that it is *impossible* for the physical activity in a computer to support understanding. For example, you would have to show that running a program actually prevents understanding. That would mean that electric circuits arranged in a complex and disorganised fashion that could not be seen as implementing a program could potentially have understanding but not if the same components were organised to form a computer. Is that right? >> Even if it's true that computers only do syntax and syntax can't >> produce meaning (it isn't, since logically there is nowhere else for >> meaning to come from) > > I think that last thought of yours needs some work. :) > > You say "logically there is nowhere else for meaning to come from", but *logically* nothing can get semantics from knowing rules of syntax, or vocabulary from knowing rules of grammar. It's true that given an unknown string of symbols it's impossible, even in principle, to work out their meaning even though you may be able to work out a syntax. However, you can ground the symbols by associating them with symbols you already know, a syntactic operation. And ultimately the symbols you already know are grounded by associating them with sense data, another syntactic operation. So syntax is both necessary and sufficient for semantics. How else can any entity, human or computer, possibly derive the meaning of something other than through a process like this? And my original point: even if you still believe meaning must come from the physical processes inside a brain, why can't it also come from the physical processes inside a computer? -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 14 14:33:07 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 14 Jan 2010 06:33:07 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <822204.96676.qm@web36508.mail.mud.yahoo.com> Message-ID: <935338.61673.qm@web36504.mail.mud.yahoo.com> Stathis Papaioannou wrote: > You've previously implied that S/H systems *can* do > everything the brain can, short of consciousness. This might be a good time to mention my view about the difference between "qualia" and "quality of consciousness". Following Chalmers, I once considered qualia as something like phenomenal entities contained by consciousness, as if consciousness could exist sans qualia. I think it better now to consider qualia as qualities of a single unified conscious experience, which is after all what qualia really means anyway. The experience of understanding words (of having semantics) counts as a quality of conscious experience. 
It's that experience among others that I think s/h systems cannot have.

Point being that consciousness and semantics do not exist as independent concepts. You can't have the potential for one without the potential for the other in humans, but I think someday we'll simulate the appearance of both in s/h systems (weak AI).

-gts

From gts_2000 at yahoo.com  Thu Jan 14 15:17:24 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 14 Jan 2010 07:17:24 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
Message-ID: <805502.70698.qm@web36505.mail.mud.yahoo.com>

--- On Thu, 1/14/10, Stathis Papaioannou wrote:

> You've said that formal programs can't produce
> understanding but physical activity can produce understanding.
> Computers not only run formal programs, they also do physical activity.
> You have a hunch that the sort of physical activity in computers is
> incapable of producing understanding. But a hunch is not good enough in a
> philosophical argument.

I don't consider it a "hunch". I look at programs (and I write them) and I look at the hardware that implements them (and I work on that too) and I see only syntactical form-based operations. And I understand and agree with those who say syntax cannot give semantics, that grammar cannot give vocabulary.

It's this last point on which we disagree. You want to believe that performing form-based syntactic operations in software or hardware will magically give rise to human-like understanding.

gotta run. more later

-gts

From emlynoregan at gmail.com  Thu Jan 14 15:29:50 2010
From: emlynoregan at gmail.com (Emlyn)
Date: Fri, 15 Jan 2010 01:59:50 +1030
Subject: [ExI] Simulation argument in Dinosaur Comics
Message-ID: <710b78fc1001140729n1f60ac75q9d8917659054ab71@mail.gmail.com>

http://www.qwantz.com/index.php?comic=1623

Nick Bostrom's cool just jumped up a level, what with being referred to by name by T-Rex. And he was already quite cool, by all accounts.

-- Emlyn

http://emlyntech.wordpress.com - coding related
http://point7.wordpress.com - ranting
http://emlynoregan.com - main site

From natasha at natasha.cc  Thu Jan 14 17:05:17 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Thu, 14 Jan 2010 11:05:17 -0600
Subject: [ExI] "Future Society" China - Max More, Cyrille Jegu, and Natasha
Message-ID: <7EB4023A822F44118A6610C6A70A8F0B@DFC68LF1>

"Future Society" was broadcast live on Tuesday. Here is a link:
http://english.cri.cn/08webcast/today.htm

It was a live panel, and Transhumanism, the Proactionary Principle, and Human Enhancement were the foci of discussion.

Natasha Vita-More

From natasha at natasha.cc  Thu Jan 14 17:09:26 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Thu, 14 Jan 2010 11:09:26 -0600
Subject: [ExI] Overposting to List (RE: Meaningless Symbols)
In-Reply-To: <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net>
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net>
Message-ID:

John, please be careful about over posting.

Thank you,
Natasha

Natasha Vita-More
From jonkc at bellsouth.net  Thu Jan 14 17:08:27 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 14 Jan 2010 12:08:27 -0500
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <399013.3045.qm@web36505.mail.mud.yahoo.com>
References: <399013.3045.qm@web36505.mail.mud.yahoo.com>
Message-ID: <1D0D195E-39CE-47C4-8CF2-5E3CF9FA5AB3@bellsouth.net>

On Jan 13, 2010, Gordon Swobe wrote:

> Forget about the CR. Neither of us care if parts of the system understand anything. We want to know if the system as a whole knows Chinese from manipulating Chinese symbols according to rules of syntax. It cannot, because syntax only tells the system 'what' to put 'where' and 'when'.

DNA used the "formal rules of syntax" that you have such contempt for to tell just 20 different types of small Amino Acids to go to certain very specific positions until they had formed something called "Gordon Swobe". As for the semantics of it, that is in the eye of the beholder, not intrinsic to it.

> The system looks at the forms of things, not at the meanings of things.

You keep making the exact same error over and over again; you look at something that is grand and complex and break it down into smaller and smaller parts until you find that the part you're looking at is not very grand or complex at all, and then you announce that this proves that there must be some secret mysterious key ingredient that is missing from the analysis. But that's just silly; analysis is the process of breaking a complex topic or substance down into smaller parts to gain a better understanding of it; if the part is still mysterious then it's still too big and you need to break it down some more. On and off is not mysterious at all, so I claim victory; you think that very lack of puzzlement is a sign of failure.

John K Clark

From jonkc at bellsouth.net  Thu Jan 14 18:07:21 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 14 Jan 2010 13:07:21 -0500
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <321375.66546.qm@web36501.mail.mud.yahoo.com>
References: <321375.66546.qm@web36501.mail.mud.yahoo.com>
Message-ID: <763F1427-3EDA-47E0-AEC4-C245825F7A7B@bellsouth.net>

On Jan 13, 2010, Gordon Swobe wrote:

> The Englishman stands naked in a field. He represents the entire system. He and his neurons (trillions upon trillions upon trillions of them if you like) process Chinese symbols according to the *syntactic rules specified in a program* which he and his neurons have memorized. Show me who or what understands the meanings of the symbols.

The trouble with all your thought experiments is that you come up with more and more bizarre situations and just say there is (or is not) understanding there, and then you challenge us to explain it; but you have no way of knowing what the little man understands either before or after he swallowed a book larger than the observable universe; all you can know is if he does (or does not) act intelligently. When Einstein did thought experiments everything followed logically; he didn't announce that the man on the train platform saw this and that unless it was obvious that is exactly what he would see. You are announcing things that are far from obvious and sometimes announcing the very thing you are trying to prove.
> If you cannot then you agree with your 7th grade English teacher who knew that following the rules of grammar (syntax) is not the same as understanding the meanings of the words (semantics).

Things may not be quite as clear cut as your 7th grade English teacher thought. I doubt if things in the quantum realm are much concerned with your teacher's opinion, and that's all semantics is, an opinion. If the Many Worlds interpretation of Quantum Mechanics is correct then in one of those worlds that has a different syntax it would be the opinion of the inhabitants that this exact same post is the operating instructions for a new type of aquarium air pump. And the syntax of your genome is very clear and specific, but tell me about its semantics.

John K Clark

From scerir at libero.it  Thu Jan 14 19:54:51 2010
From: scerir at libero.it (scerir)
Date: Thu, 14 Jan 2010 20:54:51 +0100 (CET)
Subject: [ExI] links only
Message-ID: <11649444.861021263498891904.JavaMail.defaultUser@defaultHost>

Elvis transhumanist?
http://www.3quarksdaily.com/3quarksdaily/2010/01/suspicious-minds-wonder-was-elvis-a-transhumanist.html

Nietzsche and the posthuman
http://www.3quarksdaily.com/3quarksdaily/2010/01/nietzsche-and-our-posthuman-future.html

Murray at home
http://thesciencenetwork.org/programs/santa-fe-institute-2009/murray-gell-mann-at-home

From max at maxmore.com  Thu Jan 14 20:27:54 2010
From: max at maxmore.com (Max More)
Date: Thu, 14 Jan 2010 14:27:54 -0600
Subject: [ExI] Beyond Beijing radio online link
Message-ID: <201001142028.o0EKS34O011346@andromeda.ziaspace.com>

We heard some good feedback on this interview. If you like listening to radio online...

http://english.cri.cn/7146/2010/01/13/481s542097.htm

Hour 1 is the one to listen to.

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
The Proactionary Project
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From thespike at satx.rr.com  Thu Jan 14 21:41:03 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Thu, 14 Jan 2010 15:41:03 -0600
Subject: [ExI] Art for Art's sake (well, for Damien's, anyway)
Message-ID: <4B4F8F6F.8060602@satx.rr.com>

Any ace Photoshoppers here who'd care to consider splicing a b&w mugshot into a Hubble skyscape (or the like) for a book cover? Would have to be done for the merriment of the thing, plus a cover credit line, but no pay, alas. Contact me offlist, pliz. Unless you can find some way to bring the Chinese Room into the thread.

Damien Broderick

From spike66 at att.net  Thu Jan 14 22:29:15 2010
From: spike66 at att.net (spike)
Date: Thu, 14 Jan 2010 14:29:15 -0800
Subject: [ExI] links only
In-Reply-To: <11649444.861021263498891904.JavaMail.defaultUser@defaultHost>
References: <11649444.861021263498891904.JavaMail.defaultUser@defaultHost>
Message-ID: <8953F80685A441A7A1C1AB1F26B9FCD1@spike>

> ...On Behalf Of scerir
> Subject: [ExI] links only
>
> Elvis transhumanist?
> ... scerir

Excellent! RU Sirius used to post on ExI a long time ago. Anyone here friends with him? Do invite him to drop in. As a huge Elvis fan, I do agree, he was a trendsetter with a one-in-a-billion voice.
spike

From bbenzai at yahoo.com  Thu Jan 14 22:13:15 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 14 Jan 2010 14:13:15 -0800 (PST)
Subject: [ExI] atheism
In-Reply-To:
Message-ID: <934175.30057.qm@web113604.mail.gq1.yahoo.com>

Speaking of Atheism, this cracked me up:

http://funnyatheism.com/videos/flood-problems

Although I must admit, I've never heard of the angel Geoffrey.

Ben Zaiboc

From lcorbin at rawbw.com  Thu Jan 14 23:40:14 2010
From: lcorbin at rawbw.com (Lee Corbin)
Date: Thu, 14 Jan 2010 15:40:14 -0800
Subject: [ExI] Seeing Through New Lenses
In-Reply-To:
References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com>
Message-ID: <4B4FAB5E.7060302@rawbw.com>

Jef writes

> Forwarding an article with relevance to the Extropy list. It's ironic
> that in a forum dedicated to the topic of "extropy", which might be
> interpreted as increasingly meaningful increasing change, there's such
> vocal support for hard rationality, simple truth, and seeing the world
> the way it "really is."

I haven't noticed that, lately. But then... that probably supports your contention! :)

> Edge Perspectives with John Hagel: Relationships and Dynamics - Seeing
> Through New Lenses
>
> while Westerners are more oriented to reductionist views. This basic
> difference plays out in fascinating ways, including the greater
> attention by East Asian children to verbs while Western children tend
> to learn nouns faster.
>
> One very tangible illustration of this is a simple test reported by
> Nisbett. A developmental psychologist showed three pictures to
> children - a cow, a chicken and some grass. He asked children from
> America which two of the pictures belonged together. Most of them
> grouped the cow and chicken together because they were both objects in
> the same category of animals. Chinese children on the other hand
> tended to group the cow and grass together because "cows eat grass" -
> they focused on the relationship between two objects rather than the
> objects themselves.
>
> [Which of these do YOU prefer? Which of these do you think is closer
> to The Truth? Why? - Jef]

Naturally, it shouldn't be a matter of preference, per se, but it is interesting which is more salient to one. I along with most westerners (in line with the article's point) would tend to lump the cow and the chicken together "as animals", surrendering to abstract categorization.

Have you read Flynn's book "What is intelligence?". In short, he tries to explain the Flynn effect as greater practice (especially among westerners, I suppose) at *decontextualizing*. This process causes abstract categories to come more quickly to mind than before. Hence, people today do better on IQ tests precisely because of their greater familiarity and facility with abstract categories, e.g. Animal, Mineral, or Vegetable.

What is amazing about your cite is that it flies very much in the face of this. After all, east Asians are no slouches on IQ tests, and if the tests are being explicitly designed---as Flynn would have us believe---to measure decontextualization, then this "cows eat grass" answer goes against this supposed insidious designing of the IQ tests.

I have no idea how to draw a bottom line to all this, however, except to say that just as many of Tversky and Kahneman's errors arise because humans today find themselves in a very different environment from which they evolved, most of the intellectual tasks demanded of people today (that are relatively new) would also seem to demand the ability to quickly decontextualize.
(For example, how many times does the word "of" occur in a given sentence, or how many zeros are there in a given binary string.)

Prediction: citified Chinese will go for the categorization in the "cows, chickens, and grass" more than will the countryside Chinese. Flynn has many amusing anecdotes about the way rural people resist answering IQ challenge questions in the hoped-for abstract way, and resort to common sense relationships instead---which actually would include "cows eat grass".

Very puzzling.

Lee

> I found this intriguing in the context of our continuing work at the
> Center for the Edge on the Big Shift. As I indicated in a previous
> posting, the Big Shift is a movement from a world where value creation
> depends on knowledge stocks to one where value resides in knowledge
> flows - in other words, objects versus relationships. Our Western way
> of perceiving has been very consistent with a world of knowledge
> stocks and short-term transactions. As we move into a world of
> knowledge flows, though, I suspect the East Asian focus on
> relationships may be a lot more helpful to orient us (no pun
> intended).
>
> ...

From stathisp at gmail.com  Fri Jan 15 00:59:14 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 15 Jan 2010 11:59:14 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <805502.70698.qm@web36505.mail.mud.yahoo.com>
References: <805502.70698.qm@web36505.mail.mud.yahoo.com>
Message-ID:

2010/1/15 Gordon Swobe :
> --- On Thu, 1/14/10, Stathis Papaioannou wrote:
>
>> You've said that formal programs can't produce
>> understanding but physical activity can produce understanding.
>> Computers not only run formal programs, they also do physical activity.
>> You have a hunch that the sort of physical activity in computers is
>> incapable of producing understanding. But a hunch is not good enough in a
>> philosophical argument.
>
> I don't consider it a "hunch". I look at programs (and I write them) and I look at the hardware that implements them (and I work on that too) and I see only syntactical form-based operations. And I understand and agree with those who say syntax cannot give semantics, that grammar cannot give vocabulary.
>
> It's this last point on which we disagree. You want to believe that performing form-based syntactic operations in software or hardware will magically give rise to human-like understanding.

I look at minds and I look at the hardware that implements them, and all I see is neurons firing according to mindless rules. I can't say it's obvious to me how this leads to either intelligence or consciousness. The code the brain uses to represent objects in the real world and concepts is much less well understood than the code computers use, but it is a code, and ultimately all codes are arbitrary. Presumably for the brain you don't believe the code or the algorithm implemented by neural networks firing gives rise to understanding, but rather something intrinsic to the matter or the way the matter behaves. So how are computers disadvantaged here? They too use a code and implement algorithms, and they too contain matter engaged in physical activity.
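A trivial sketch of what I mean by codes being arbitrary; both token sets below are made up, and nothing hangs on them:

# The same relational fact spelled out in two arbitrary codes.
neural_code = {"cow": "1101", "eats": "1010", "grass": "0110"}
ascii_code = {"cow": "COW", "eats": "EATS", "grass": "GRASS"}

def encode(fact, code):
    # Token substitution is all that distinguishes the two encodings.
    return [code[word] for word in fact]

fact = ["cow", "eats", "grass"]
print(encode(fact, neural_code))   # ['1101', '1010', '0110']
print(encode(fact, ascii_code))    # ['COW', 'EATS', 'GRASS']

Swap one token set for the other and the structure carried by the mapping is untouched; the tokens themselves could be anything.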
-- Stathis Papaioannou

From max at maxmore.com  Fri Jan 15 06:47:48 2010
From: max at maxmore.com (Max More)
Date: Fri, 15 Jan 2010 00:47:48 -0600
Subject: [ExI] atheism
Message-ID: <201001150702.o0F72W9Q014431@andromeda.ziaspace.com>

Ben Zaiboc wrote:

>Speaking of Atheism, this cracked me up:
>
>http://funnyatheism.com/videos/flood-problems
>
>Although I must admit, I've never heard of the angel Geoffrey.

That was excellent, thanks. I also enjoyed "Gay Scientists Isolate Christian Gene"
http://funnyatheism.com/videos/gay-scientists-isolate-christian-gene

From gts_2000 at yahoo.com  Fri Jan 15 13:06:28 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Fri, 15 Jan 2010 05:06:28 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
Message-ID: <213685.55239.qm@web36502.mail.mud.yahoo.com>

--- On Thu, 1/14/10, Stathis Papaioannou wrote:

> Presumably for the brain you don't believe the code or the
> algorithm implemented by neural networks firing gives rise
> to understanding,

If the brain uses code or implements algorithms at all (and it probably does not) then it must do something else besides. The computationalist theory of mind simply fails to explain the facts. Even if we can compute the brain, this would not mean that the brain actually does computations.

> but rather something intrinsic to the matter or the way the matter
> behaves.

Right. The matter in the brain does something more sophisticated than the mere running of programs. Science does not yet understand it well, just as it does not yet understand many, many, many things.

I think that eventually neuroscience and the philosophy of mind will merge into one field -- that neuroscientists will come to see that they hold in their hands the answers to these questions of philosophy. It has already started if you look with open eyes: neuroscientists have produced antidepressant drugs that brighten mood, a quality of consciousness, and drugs that alleviate pain, another quality of consciousness, and so on and so on.

People would understand this obvious link between science and philosophy except that they're still unwitting slaves to the vocabulary and concepts of mind/matter duality left over from the time of Descartes. People cannot see what should be blindingly obvious: that consciousness exists as part of the physical world, as a high level process of the physical brain.

> So how are computers disadvantaged here?

They can't get semantics from syntax any more than you can.

-gts

From stefano.vaj at gmail.com  Fri Jan 15 14:34:41 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Fri, 15 Jan 2010 15:34:41 +0100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <213685.55239.qm@web36502.mail.mud.yahoo.com>
References: <213685.55239.qm@web36502.mail.mud.yahoo.com>
Message-ID: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com>

2010/1/15 Gordon Swobe :
> If the brain uses code or implements algorithms at all (and it probably does not) then it must do something else besides.

It may surprise you, but I have implemented in my own brain at a very young age the algorithm allowing me to multiply integers of arbitrary length. But perhaps I am not really computing numbers at all, it's all an illusion, the solution actually gets communicated to me from another dimension through some unknown form of quantum-based communication mechanism... :-)

-- Stefano Vaj

From gts_2000 at yahoo.com  Fri Jan 15 15:32:04 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Fri, 15 Jan 2010 07:32:04 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com>
Message-ID: <129394.58494.qm@web36507.mail.mud.yahoo.com>

--- On Fri, 1/15/10, Stefano Vaj wrote:

> It may surprise you, but I have implemented in my own brain
> at a very young age the algorithm allowing me to multiply integers of
> arbitrary length.

Nobody questions that we can intentionally run algorithms in our brains. But how can you come to know the meanings of symbols merely by virtue of running programs in your mind that manipulate symbols according to syntactic rules as computer programs actually do? You can't.

Less easy to see is that your computer calculates the answers to mathematical questions without understanding the meanings of the numbers. I think you'll agree that as you count to ten on your fingers, your mind, but not your fingers, understands the numbers. We invented calculators because we needed to count to eleven. They don't understand numbers either.

-gts

From aware at awareresearch.com  Fri Jan 15 16:13:23 2010
From: aware at awareresearch.com (Aware)
Date: Fri, 15 Jan 2010 08:13:23 -0800
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <129394.58494.qm@web36507.mail.mud.yahoo.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com>
Message-ID:

On Fri, Jan 15, 2010 at 7:32 AM, Gordon Swobe wrote:
> --- On Fri, 1/15/10, Stefano Vaj wrote:
>
>> It may surprise you, but I have implemented in my own brain
>> at a very young age the algorithm allowing me to multiply integers of
>> arbitrary length.
>
> Nobody questions that we can intentionally run algorithms in our brains. But how can you come to know the meanings of symbols merely by virtue of running programs in your mind that manipulate symbols according to syntactic rules as computer programs actually do? You can't.
>
> Less easy to see is that your computer calculates the answers to mathematical questions without understanding the meanings of the numbers.

It's funny in a meta way that all this discussion undoubtedly reflects the true nature of the participants, but represents no "true understanding" at this level of the system either.

Gordon, I'll say it again: What you're seeking to grasp is not ontological; it's epistemological.

- Jef

From stefano.vaj at gmail.com  Fri Jan 15 17:27:20 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Fri, 15 Jan 2010 18:27:20 +0100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <129394.58494.qm@web36507.mail.mud.yahoo.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com>
Message-ID: <580930c21001150927o3db292c5n7c5428669adc81e1@mail.gmail.com>

2010/1/15 Gordon Swobe :
> Nobody questions that we can intentionally run algorithms in our brains. But how can you come to know the meanings of symbols merely by virtue of running programs in your mind that manipulate symbols according to syntactic rules as computer programs actually do? You can't.

The "meaning" of a symbol is nothing else than its association with another symbol (say, the 'meaning assigned to "x" in this equation is "3"'). What is there in such a trivial transaction which would not be "algorithmic"? :-/

-- Stefano Vaj

From jonkc at bellsouth.net  Fri Jan 15 18:01:53 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 15 Jan 2010 13:01:53 -0500
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <296296.66105.qm@web113614.mail.gq1.yahoo.com>
References: <296296.66105.qm@web113614.mail.gq1.yahoo.com>
Message-ID: <34B0964D-C0F6-4C70-8AFB-CFE1DA575B9A@bellsouth.net>

Gordon Swobe :

> Here's the classic one-line program:
>
> print "Hello World"

The title of this thread is "Meaningless Symbols"; if "print" was one of those to the computer then it would not do exactly the same thing each time it encountered that symbol; instead it would do some arbitrary thing. Apparently the computer ascribed meaning to at least one of those "meaningless symbols".

John K Clark

From eric at m056832107.syzygy.com  Fri Jan 15 18:23:13 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 15 Jan 2010 18:23:13 -0000
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <129394.58494.qm@web36507.mail.mud.yahoo.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com>
Message-ID: <20100115182313.5.qmail@syzygy.com>

Gordon writes:
>Less easy to see is that your computer calculates the answers to
> mathematical questions without understanding the meanings of the
> numbers.

My computer certainly doesn't understand the beauty of clouds clinging to a mountainside, even if I'm using it to process a photograph of those clouds.

My computer does understand numbers, though.

That understanding is hardwired into adder circuits in the CPU.

The fact that it comes up with the correct answers for the following:

0 + 0 = 0
1 + 1 = 2

indicates that it understands the fundamental difference between the numbers zero and one. The essential zeroness and oneness are captured in the behavior of addition. The syntax of those two statements above is identical, but the fact that the first and last symbols in the first line must be the same results from the zeroness of the first symbol.

I think what you keep coming back to is the concept of "qualia". You say that simulated water doesn't get your computer wet. Well, that's just blindingly obvious, but I think what you're really concerned with is where the wetness quale is. When you talk of "symbol grounding", you are asking about the qualia for those symbols. When you talk of the neural correlates of consciousness, you're asking what qualia are built out of. When you assert that syntax can never create semantics, you're really asserting that syntax cannot create qualia.

Does this sound right to you?

Well, I'd like to assert that qualia are symbols, just like the other symbols that your brain manipulates. The redness quale may not be the same symbol as the word red, but they're closely associated. You can say "I am seeing red" when the redness quale symbol is active.

There's nothing particularly mysterious about qualia symbols. When red light hits your retina, a pattern of neural firing occurs in your brain, part of which represents the red quale symbol. Just like other symbols in your brain, qualia can be represented as data and moved to other substrates. We don't yet know how the brain encodes these symbols, but we can be fairly confident that that encoding is represented by the synaptic connections between neurons, and not by ATP molecules in mitochondria.

-eric

From thespike at satx.rr.com  Fri Jan 15 18:43:48 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 15 Jan 2010 12:43:48 -0600
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <20100115182313.5.qmail@syzygy.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com>
Message-ID: <4B50B764.8010003@satx.rr.com>

On 1/15/2010 12:23 PM, Eric Messick wrote:

> My computer does understand numbers, though.
>
> That understanding is hardwired into adder circuits in the CPU.
>
> The fact that it comes up with the correct answers for the following:
>
> 0 + 0 = 0
> 1 + 1 = 2
>
> indicates that it understands the fundamental difference between the
> numbers zero and one.

I just poured 3 cups of water into a 2 cup jar. Does the fact that it stopped accepting water after I'd put in 2 cups and overflowed the rest mean it *understands* 3>2? Then I put a 1 foot rule next to a book and the 9 matched up with the top of the book. Did the rule *understand* how tall the book is? Computer programs understand nothing more than that. This all reminds me of the behaviorist idiocy of the 1950s.

Damien Broderick

From eric at m056832107.syzygy.com  Fri Jan 15 19:38:45 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 15 Jan 2010 19:38:45 -0000
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <4B50B764.8010003@satx.rr.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com>
Message-ID: <20100115193845.5.qmail@syzygy.com>

Damien writes:
>I just poured 3 cups of water into a 2 cup jar. Does the fact that it
>stopped accepting water after I'd put in 2 cups and overflowed the rest
>mean it *understands* 3>2? Then I put a 1 foot rule next to a book and
>the 9 matched up with the top of the book. Did the rule *understand* how
>tall the book is?

No, I wouldn't attribute understanding to those objects. You're not describing the manipulation of symbols here, but of physical objects. Understanding of symbols is a symbolic operation.

> Computer programs understand nothing more than that.

I'll agree that understanding of zeroness and oneness is a very basic thing. The adder that encodes that understanding is a very simple circuit, so its level of understanding must be very simple. Your brain is much more complicated, so it can understand much more complicated things.

>This all reminds me of the behaviorist idiocy of the 1950s.

Sounds like you've got a problem with behaviorist descriptions. Can you explain?

-eric

From thespike at satx.rr.com  Fri Jan 15 19:49:08 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 15 Jan 2010 13:49:08 -0600
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <20100115193845.5.qmail@syzygy.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <20100115193845.5.qmail@syzygy.com>
Message-ID: <4B50C6B4.1010404@satx.rr.com>

On 1/15/2010 1:38 PM, Eric Messick wrote:

> Sounds like you've got a problem with behaviorist descriptions. Can
> you explain?

I don't have to. Chomsky did it in 1959 when he killed Skinner with a single review.
[reprinted http://www.chomsky.info/articles/1967----.htm ]

Damien Broderick

From possiblepaths2050 at gmail.com Fri Jan 15 19:50:07 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 15 Jan 2010 12:50:07 -0700
Subject: [ExI] Overposting to List (RE: Meaningless Symbols)
In-Reply-To:
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net>
Message-ID: <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com>

Hi Natasha, I don't understand about the warning. I have hardly posted at all over the last month or so. And so I plead innocent! hee But I'm sure I will offend down the road. ; )

Oh, have you seen "The Singularity is Near" movie yet, or "The Transcendent Man" documentary? What did you think of them?

Oh, and did you like Avatar? I admit it felt like Dances with Wolves in Space, but I don't care because it still enthralled me with the wonder and romance of a strange world. I did think that for the mid 22nd century the military technology of humanity was a joke. But hey, it's a minor quibble. A plotline having a military nano utility fog devouring the poor beleaguered aliens within 2 seconds flat would not have been very sporting...

I'm playing with the idea (if things work out...) of relocating to NYC to take over my father's $325 two bedroom rent-controlled apt. (two miles away from the site of the WTC). I am not in love with the idea of living in the Big Apple but I suppose there is lots of opportunity there. I admit to loving the cultural/nightlife activities of the Phoenix Valley, it's just the hot climate (and *mindlessly* conservative political climate...) that I hate. And here women roll their eyes at me because I don't have a car (Phoenix is totally a car place!). As I understand it, cars are very optional for even middle class people in NYC, due to the massive parking headache.

Punch Max in the arm for me! : )

Warm wishes, John

2010/1/14 Natasha Vita-More 

> John, please be careful about over posting.
>
> Thank you,
> Natasha
>
> Natasha Vita-More

From thespike at satx.rr.com Fri Jan 15 20:09:21 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 15 Jan 2010 14:09:21 -0600
Subject: [ExI] NY NY it's a wonderful town
In-Reply-To: <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com>
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com>
Message-ID: <4B50CB71.5090907@satx.rr.com>

On 1/15/2010 1:50 PM, John Grigg wrote:

> I'm playing with the idea (if things work out...) of relocating to NYC
> to take over my father's $325 two bedroom rent-controlled apt. (two miles
> away from the site of the WTC). I am not in love with the idea of
> living in the Big Apple

Are you insane? Hey, I'll take it off your hands...
:)

Damien Broderick

From mlatorra at gmail.com Fri Jan 15 21:44:47 2010
From: mlatorra at gmail.com (Michael LaTorra)
Date: Fri, 15 Jan 2010 14:44:47 -0700
Subject: [ExI] NY NY it's a wonderful town
In-Reply-To: <4B50CB71.5090907@satx.rr.com>
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> <4B50CB71.5090907@satx.rr.com>
Message-ID: <9ff585551001151344n7cc79ac4k4627a515e83228ab@mail.gmail.com>

John, if you do this, sub-let [rent] the other bedroom. If you're in a good neighborhood, it's worth far more than the rent you're paying. For example:

2 Bedrooms, 1 Bath, 828 sq. ft., $1700
http://new-york-city.apartmenthomeliving.com/apartments-for-rent/2-bedroom/from-600

New York is a great city. Great and terrible. Extremely expensive. Extremely exciting. I was born there.

Best of luck!

Mike LaTorra

On Fri, Jan 15, 2010 at 1:09 PM, Damien Broderick wrote:
> On 1/15/2010 1:50 PM, John Grigg wrote:
>> I'm playing with the idea (if things work out...) of relocating to NYC
>> to take over my father's $325 two bedroom rent-controlled apt. (two miles
>> away from the site of the WTC). I am not in love with the idea of
>> living in the Big Apple
>
> Are you insane? Hey, I'll take it off your hands... :)
>
> Damien Broderick

From natasha at natasha.cc Fri Jan 15 21:45:27 2010
From: natasha at natasha.cc (natasha at natasha.cc)
Date: Fri, 15 Jan 2010 16:45:27 -0500
Subject: [ExI] Overposting to List (RE: Meaningless Symbols)
In-Reply-To: <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com>
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com>
Message-ID: <20100115164527.dfvm9n1k0gwo88wc@webmail.natasha.cc>

It is John Clark who overposted, not you.

> Oh, have you seen "The Singularity is Near" movie yet, or "The Transcendent
> Man" documentary? What did you think of them?

No. I heard they weren't well done.

> Oh, and did you like Avatar?

Will be seeing Avatar very soon!

Natasha

From natasha at natasha.cc Fri Jan 15 21:48:28 2010
From: natasha at natasha.cc (natasha at natasha.cc)
Date: Fri, 15 Jan 2010 16:48:28 -0500
Subject: [ExI] NY NY it's a wonderful town
In-Reply-To: <9ff585551001151344n7cc79ac4k4627a515e83228ab@mail.gmail.com>
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> <4B50CB71.5090907@satx.rr.com> <9ff585551001151344n7cc79ac4k4627a515e83228ab@mail.gmail.com>
Message-ID: <20100115164828.9f38bvr1cgocs0wo@webmail.natasha.cc>

I was born there too! I love NY!

Quoting Michael LaTorra :

> John, if you do this, sub-let [rent] the other bedroom. If you're in a good
> neighborhood, it's worth far more than the rent you're paying. For example:
>
> 2 Bedrooms, 1 Bath, 828 sq. ft., $1700
> http://new-york-city.apartmenthomeliving.com/apartments-for-rent/2-bedroom/from-600
>
> New York is a great city. Great and terrible.
> Extremely expensive. Extremely exciting. I was born there.
>
> Best of luck!
>
> Mike LaTorra
>
> On Fri, Jan 15, 2010 at 1:09 PM, Damien Broderick wrote:
>> On 1/15/2010 1:50 PM, John Grigg wrote:
>>> I'm playing with the idea (if things work out...) of relocating to NYC
>>> to take over my father's $325 two bedroom rent-controlled apt. (two miles
>>> away from the site of the WTC). I am not in love with the idea of
>>> living in the Big Apple
>>
>> Are you insane? Hey, I'll take it off your hands... :)
>>
>> Damien Broderick

From eric at m056832107.syzygy.com Fri Jan 15 22:16:55 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 15 Jan 2010 22:16:55 -0000
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <4B50C6B4.1010404@satx.rr.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <20100115193845.5.qmail@syzygy.com> <4B50C6B4.1010404@satx.rr.com>
Message-ID: <20100115221655.5.qmail@syzygy.com>

Damien writes:
>On 1/15/2010 1:38 PM, Eric Messick wrote:
>
>> Sounds like you've got a problem with behaviorist descriptions. Can
>> you explain?
>
>I don't have to. Chomsky did it in 1959 when he killed Skinner with a
>single review.
>
>[reprinted http://www.chomsky.info/articles/1967----.htm ]

I haven't seen the book Chomsky is reviewing, and I only skimmed the review, but I saw no reason to disagree with Chomsky.

Chomsky points out that Skinner is trying to stretch the results of simple scientific experiments on rats and pigeons to cover linguistic activity of humans, and that Skinner loses specificity by doing this. Chomsky also gives the impression that Skinner is ignoring much of the internal state of a complex organism, again resulting in a vague lack of specificity:

Chomsky: One would naturally expect that prediction of the behavior of a complex organism (or machine) would require, in addition to information about external stimulation, knowledge of the internal structure of the organism, the ways in which it processes input information and organizes its own behavior.

I'm not sure how my discussion of a CPU adder understanding the difference between zeroness and oneness prompted your remark about behaviorism. I'd say I was a behaviorist only in the weak sense that our access to the internal state of a brain is currently available only through observing behavior.

I can't imagine that Skinner would claim that the previous life history of a complex organism had no bearing on current complex behavior of that organism, but Chomsky seems to be implying that. In any case, I think that the internal state built up by life experience is crucial to complex behavior, even if simple behavior can be successfully molded by simple conditioning, as Skinner's experiments show.

So: The meaning of a symbol in a brain is encoded in the interconnections of the neurons which activate when that symbol is active. We can probe the meaning of a symbol by observing the behavior of the processor. Processing elements in a CPU are connected such that certain symbols mean zero and one. We can probe that meaning by observing how the CPU adds numbers. Behavior at this level is simple enough that we're still in the realm where Chomsky wouldn't be criticizing Skinner about specificity.
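To make that concrete, here is a toy sketch (mine, in Python, with names of my own invention -- not any real CPU circuit) of a one-bit full adder built from nothing but boolean connectives. Probing its input/output behavior is the only sense of "understanding" I'm claiming for it:

  # A one-bit full adder assembled from bare boolean operations.
  # The circuit has no notion of "number"; it only maps input
  # signals to output signals.
  def full_adder(a, b, carry_in):
      s = a ^ b ^ carry_in                        # sum bit
      carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
      return s, carry_out

  # Ripple-carry addition over little-endian bit lists.
  def add_bits(x_bits, y_bits):
      carry = 0
      out = []
      for a, b in zip(x_bits, y_bits):
          s, carry = full_adder(a, b, carry)
          out.append(s)
      out.append(carry)
      return out

  print(add_bits([0, 0], [0, 0]))   # [0, 0, 0]  i.e. 0 + 0 = 0
  print(add_bits([1, 0], [1, 0]))   # [0, 1, 0]  i.e. 1 + 1 = 2

Whether mapping those bit patterns correctly amounts to grasping zeroness is, of course, exactly what's in dispute here.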
-eric From scerir at libero.it Sat Jan 16 00:03:28 2010 From: scerir at libero.it (scerir) Date: Sat, 16 Jan 2010 01:03:28 +0100 (CET) Subject: [ExI] Meaningless Symbols. Message-ID: <30151309.987061263600208512.JavaMail.defaultUser@defaultHost> >> My computer does understand numbers, though. There is some hidden semantics, in my calculator too, and in any calculator I must say. Possibly the signature of some Great Programmer? Punch any three digits into your calculator. Then punch in the same three again. No matter which digits you choose, the resulting six-digit number will be exactly divisible by 13, that result divisible by 11, and the last result by 7. (And, of course, you will end up with the same three-digit number you started with.) From spike66 at att.net Fri Jan 15 23:57:39 2010 From: spike66 at att.net (spike) Date: Fri, 15 Jan 2010 15:57:39 -0800 Subject: [ExI] have you ever seen anything like this? Message-ID: <66893462A8AB44BA8E07C37D33BAC07D@spike> I have heard of humans who get caught up in the emotion of the battle, suicidal rage etc, but I don't think I have ever seen it in any other beast. Here a woodpecker keeps coming back to fight, clearly not for self defense or with any hope of actually devouring the serpent, but rather just to injure or slay it: http://www.youtube.com/watch?v=14yxYTOdL38 spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Jan 16 00:28:49 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 15 Jan 2010 16:28:49 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <580930c21001150927o3db292c5n7c5428669adc81e1@mail.gmail.com> Message-ID: <417975.19906.qm@web36503.mail.mud.yahoo.com> Stefano, If you watch a small child solve a math problem on his fingers then you will watch a fellow human use a simple calculator to facilitate and extend his mathematical understanding. The understanding of the numbers takes place in his mind, not in his fingers. A few years later his parents will buy him a pocket calculator. If the child thinks clearly then he will understand how his new battery-powered gizmo works in the same way his fingers once did: as a tool for facilitating and extending his own mathematical understanding. If the child cannot think clearly then he may find himself believing that something inside his calculator has a mental life capable of understanding mathematics. Presumably that conscious entity lives in the microchip. It goes away when the batteries die, but comes back if the boy plugs in the AC adapter. That little mind inside the microchip doesn't have much personality, but boy it sure is a whiz at doing math. -gts From gts_2000 at yahoo.com Sat Jan 16 02:02:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 15 Jan 2010 18:02:21 -0800 (PST) Subject: [ExI] Meaningless Symbols. Message-ID: <171918.49385.qm@web36504.mail.mud.yahoo.com> --- On Fri, 1/15/10, John Clark wrote: >> print "Hello World" > > The title of this thread is "Meaningless Symbols", if "print" was one of > those to the computer then it would not do exactly the same thing each > time it encountered that symbol, instead it would do some > arbitrary thing. Apparently the computer ascribed meaning to > at least one of those "meaningless symbols". You've stumbled close to the truth there, John. As I pointed out in my original message, "print" counts as a syntactic rule. 
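To illustrate what I mean by a syntactic rule (this is only a toy sketch of my own, not any real interpreter), consider how a program can dispatch on the token "print" by nothing more than shape-matching:

  # Toy "interpreter": the token "print" is handled purely
  # syntactically, by matching its shape against a rule.
  def run(statement):
      token, _, argument = statement.partition(" ")
      if token == "print":              # bare string match, nothing more
          print(argument.strip('"'))
      else:
          raise SyntaxError("unrecognized token: " + token)

  run('print "Hello World"')            # prints: Hello World

Rename the token "print" to any arbitrary string, adjust the rule to match, and the system behaves identically. Nothing in it grasps a meaning.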
Although it stretches the definition of "understanding" I can for the sake of argument agree that s/h systems mechanically understand syntax. They cannot however get semantics from their so-called understanding of syntax. Nor can humans get semantics from syntax, for that matter, and humans really do understand syntax.

I mentioned also that the classic one-line "Hello World" program does not differ in any important philosophical way from the most sophisticated possible program. Someone made some scornful and ignorant comment about that.

Let us say that we have a sophisticated program that behaves in every way like a human such that it passes the Turing test. We then add to that program the line 'print "Hello World"' (or perhaps 'speak "Hello World"') such that the command will execute at an appropriate time still consistent with passing the Turing test. That advanced program will not understand the meaning of "Hello World" any more than does the one-line program running alone.

S/H systems can do no more than follow syntactic rules for crunching words and symbols. They have no way to attach meanings to the symbols or to understand those meanings. Those semantic functions belong to the humans who program and operate them.

-gts

From stathisp at gmail.com Sat Jan 16 02:13:09 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 16 Jan 2010 13:13:09 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <213685.55239.qm@web36502.mail.mud.yahoo.com>
References: <213685.55239.qm@web36502.mail.mud.yahoo.com>
Message-ID:

2010/1/16 Gordon Swobe :
> --- On Thu, 1/14/10, Stathis Papaioannou wrote:
>
>> Presumably for the brain you don't believe the code or the
>> algorithm implemented by neural networks firing gives rise
>> to understanding,
>
> If the brain uses code or implements algorithms at all (and it probably does not) then it must do something else besides. The computationalist theory of mind simply fails to explain the facts.

The primary visual cortex (V1) is isomorphic with the visual field. Certain neurons fire when you see a vertical line and different neurons fire when you see a horizontal line. The basic pattern information is passed on to deeper layers of the cortex, V2-V5, where it is processed further and gives rise to perception of more complex visual phenomena, such as perception of foreground moving on background and face recognition. The individual neurons follow relatively simple rules determining when they fire, but the network of neurons behaves in a complex way due to the complex interconnections. The purpose of the internal machinery of the neuron is to ensure that it behaves appropriately in response to input from other neurons. The important point here is that the neuron follows an algorithm which has no hint in it of visual perception.

If you replaced parts of the neuron with artificial components that left the algorithm unchanged, the neuron would function normally and the subject's perception would be normal. It wouldn't matter what the artificial components were made of as long as the neuron behaved normally; and as discussed this is true with the strength of logical necessity, unless you are willing to entertain what I consider an incoherent notion of consciousness. Moreover, it is the pattern of interconnected neurons firing that is necessary and sufficient for the person's behaviour, so if consciousness is something over and above this it would seem to be completely superfluous.
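For concreteness, here is a minimal sketch of the kind of simple local rule I mean -- a leaky integrate-and-fire unit (the parameters are my own toy choices, not physiological values). Note that nothing in the rule mentions lines, faces or perception:

  # Leaky integrate-and-fire neuron: a simple local rule.
  # Perception, if it arises at all, arises from the wiring of
  # many such units, not from anything written in this rule.
  class Neuron:
      def __init__(self, threshold=1.0, leak=0.9):
          self.potential = 0.0
          self.threshold = threshold
          self.leak = leak

      def step(self, weighted_inputs):
          # Decay the old potential, add the new input, fire on threshold.
          self.potential = self.potential * self.leak + sum(weighted_inputs)
          if self.potential >= self.threshold:
              self.potential = 0.0
              return True
          return False

  n = Neuron()
  for t in range(5):
      print(t, n.step([0.3]))   # fires once the potential accumulates

Any substrate that implements the same step rule leaves the network's behaviour, by construction, unchanged.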
But if you still think that the consciousness of the brain resides in the actual matter of the neurons rather than their function, then you could consistently maintain that it resides in the matter of the modified neurons provided that they still functioned normally. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 16 02:42:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 16 Jan 2010 13:42:27 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <213685.55239.qm@web36502.mail.mud.yahoo.com> References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/16 Gordon Swobe : > I think that eventually neuroscience and the philosophy of mind will merge into one field -- that neuroscientists will come to see that they hold in their hands the answers to these questions of philosophy. When neuroscientists make a working model of a brain and claim that, since it behaves like a real brain it must also have the mind of a real brain, there will be the doubters. The neuroscientists will stamp their feet and point to their experimental results but the doubters will still doubt, as there is no possible empirical fact that will convince them. Therefore, it will by definition always remain a philosophical question. > It has already started if you look with open eyes: neuroscientists have produced antidepressant drugs that brighten mood, a quality of consciousness, and drugs that alleviate pain, another quality of consciousness, and so on and so on. The drugs can do this only by affecting the behaviour of neurons. What you claim is that it is possible to make a physical change to a neuron which leaves its behaviour unchanged but changes or eliminates the person's consciousness. -- Stathis Papaioannou From lacertilian at gmail.com Sat Jan 16 03:08:47 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 15 Jan 2010 19:08:47 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: Regarding antidepressants and other mind-altering drugs: I'd like to add that simply because we have a drug which *produces* happiness, this does not necessarily mean anyone actually understands why that is yet. Gordon and Stathis seem to be implying otherwise. A quick Google search for "history of antidepressants" turns up the following: http://web.grinnell.edu/courses/sst/f01/SST395-01/PublicPages/PerfectDrugs/Chris/history/index2.html Four sentences in, and sure enough the telltale term "accidental discovery" appears. No one knows what happens beyond the blood-brain barrier. There aren't even that many convincing *theories, *as far as I can tell, and even fewer subject to scientific testing. Incidentally, this is my first message to Extropy-Chat. I probably screwed up the format somehow. If any hardened Gmail users want to point out my inevitable mistakes, I'd be much obliged. On Fri, Jan 15, 2010 at 6:42 PM, Stathis Papaioannou wrote: > 2010/1/16 Gordon Swobe : > > > I think that eventually neuroscience and the philosophy of mind will > merge into one field -- that neuroscientists will come to see that they hold > in their hands the answers to these questions of philosophy. > > When neuroscientists make a working model of a brain and claim that, > since it behaves like a real brain it must also have the mind of a > real brain, there will be the doubters. 
> The neuroscientists will stamp their feet and point to their experimental
> results but the doubters will still doubt, as there is no possible
> empirical fact that will convince them. Therefore, it will by definition
> always remain a philosophical question.
>
> > It has already started if you look with open eyes: neuroscientists have
> > produced antidepressant drugs that brighten mood, a quality of
> > consciousness, and drugs that alleviate pain, another quality of
> > consciousness, and so on and so on.
>
> The drugs can do this only by affecting the behaviour of neurons. What
> you claim is that it is possible to make a physical change to a neuron
> which leaves its behaviour unchanged but changes or eliminates the
> person's consciousness.
>
> --
> Stathis Papaioannou

From spike66 at att.net Sat Jan 16 06:13:02 2010
From: spike66 at att.net (spike)
Date: Fri, 15 Jan 2010 22:13:02 -0800
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
References: <213685.55239.qm@web36502.mail.mud.yahoo.com>
Message-ID: <8F502A3F7FAE466AA1BE89EDD3419908@spike>

On Behalf Of Spencer Campbell
...If any hardened Gmail users want to point out my inevitable mistakes, I'd be much obliged... Spencer

Hardened Gmail users. {8^D Haaa I like that. Welcome Spencer! {8-]

spike

From pharos at gmail.com Sat Jan 16 10:24:08 2010
From: pharos at gmail.com (BillK)
Date: Sat, 16 Jan 2010 10:24:08 +0000
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
References: <213685.55239.qm@web36502.mail.mud.yahoo.com>
Message-ID:

On 1/16/10, Spencer Campbell wrote:
> Incidentally, this is my first message to Extropy-Chat. I probably screwed
> up the format somehow. If any hardened Gmail users want to point out my
> inevitable mistakes, I'd be much obliged.
>

Hi

Your mail was fine for other Gmail users.

But, (there's always a but)   :)
the Mailman list archives and other mail systems prefer messages in Plain text (i.e. not HTML).

It is recommended to trim the message you are replying to down to only the portion that you are commenting on and place your reply after those sentences. (This avoids messages that grow like Topsy, ever-increasing, until megabytes of unread data are being sent).

Best wishes,  BillK

From stathisp at gmail.com Sat Jan 16 12:36:34 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 16 Jan 2010 23:36:34 +1100
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
References: <213685.55239.qm@web36502.mail.mud.yahoo.com>
Message-ID:

2010/1/16 Spencer Campbell :
> Regarding antidepressants and other mind-altering drugs: I'd like to add
> that simply because we have a drug which produces happiness, this does not
> necessarily mean anyone actually understands why that is yet. Gordon and
> Stathis seem to be implying otherwise.
> A quick Google search for "history of antidepressants" turns up the
> following:
> http://web.grinnell.edu/courses/sst/f01/SST395-01/PublicPages/PerfectDrugs/Chris/history/index2.html
> Four sentences in, and sure enough the telltale term "accidental discovery"
> appears. No one knows what happens beyond the blood-brain barrier. There
> aren't even that many convincing theories, as far as I can tell, and even
> fewer subject to scientific testing.
We do know what most antidepressant drugs do, insofar as we know what receptors they affect and how exactly they affect them. What we don't know is why this should have an effect on mood. It's a similar story with other psychoactive drugs. The other point to make about antidepressants is that they don't work very well: they work in about 30% of patients who have clinical depression, and not at all in people who are simply unhappy. Clinical trials show a 60% efficacy, but the placebo has at least 30% efficacy in these trials. -- Stathis Papaioannou From nebathenemi at yahoo.co.uk Sat Jan 16 12:36:38 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sat, 16 Jan 2010 12:36:38 +0000 (GMT) Subject: [ExI] University degrees (in response to Emlyn) In-Reply-To: Message-ID: <889094.11821.qm@web27003.mail.ukl.yahoo.com> Sorry for not replying earlier in the week, I've had a tough time in a dead-end job (which will become relevant later). On tuesday, Emlyn wrote a long post about Jaron Lanier's new book, and Emlyn talked about modern technological upheavals. I was particularly interested by the following paragraph: "(Next on the chopping block: Universities, whose cash cow, the undergrad degree, will be replaced with cheap/free alternative, and scientific journals, which are much better suited by free online resources if the users can just escape the reputation network effect of the existing closed journals)" If only. Back in the time of Socrates, the finest Sophists could command 100 minas of gold for a course of education designed to broaden the mind of the city's finest young men. 60 minas = 1 talent, or 100 librum (pounds) of gold to those thinking in roman units, so 100 minas is 166 pounds of gold - maybe a million dollars in the current gold bubble, certainly a few hundred thousand for most of the past decade - makes an Ivy League education look cheap. Socrates taught his philosophy while drinking, and the price of admission was being able to keep up with the master's legendary wine consumption. Socrates was disdainful of charging to teach philosophy, but others charged what the market would bear. In our modern age, where colossal numbers of books are available and huge amounts of information online, can we educate ourselves easily? Or do we once again find that we only value what is paid for? The undergraduate degree itself may serve several purposes, including improving one's mental abilities, improving your career prospects, and preparing you for specific tasks such as research in a particular field. The first is hard to put a price on, but the second is a real stumbling block. Many jobs in well-paid or interesting fields now ask for degrees, preferably in a specific field. If we can provide a low-cost online alternative to the university-based degrees, will employers still value it the same or will prejudice place your new education at a lower level than the old-fashioned one? We already have distance-learning degrees, and some employers take them just fine and others are prejudiced. I'm wondering how well any new attempts to reshape higher education will work. Emlyn's post also talked about job losses due to technology, saying "The only real threatened jobs are where people are doing low value crap. Padding. High value stuff will remain." Well, my well-paid job doing reasonably high-value stuff in insurance disappeared with the start of our current economic problems back in 2007. 
I blew my savings while unemployed, and have done dead-end jobs (office temping, call-centre work) since because it beats welfare. I have a deep need to retrain and find a new career, but my old degree doesn't count for much. To get into many jobs paying more than the dead-end ones, I would need qualifications in a specific area. These qualifications cost money, but I can't afford to pay for more qualifications without getting into debt - unfortunately in a dead-end job it's hard to get a loan at a less-than-punishing rate right now.

So, it seems our current system for keeping people fed and housed (or paid in some manner) and trying to harness their talents into work that keeps the country going (or "meaningful employment") is flawed. Also, the system for educating minds to do theoretical work isn't great - for theory work, you need minds that understand the field, access to the information of what has already been done, and plenty of time. We could employ plenty of intelligent but otherwise underemployed people this way, but no-one's found a cheap enough way of imparting educations and offering access to journals.

I could go on longer about this, but I need to get back to finding a less stressful source of above-welfare-level income.

Tom

From bbenzai at yahoo.com Sat Jan 16 13:03:02 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Sat, 16 Jan 2010 05:03:02 -0800 (PST)
Subject: [ExI] Meaningless Symbols
In-Reply-To:
Message-ID: <724270.2104.qm@web113618.mail.gq1.yahoo.com>

Gordon Swobe 

> I mentioned also that the classic one-line "Hello World"
> program does not differ in any important philosophical way
> from the most sophisticated possible program. Someone made
> some scornful and ignorant comment about that.

'Someone' said that it was one of the most ridiculous things they had ever heard.

Scornful? Certainly, and justifiably so. Ignorant? Only if you consider it ignorant to point out the ridiculousness of claiming that a hydrogen atom doesn't differ in any important way from a solar system, or that the operation of a spinal reflex doesn't differ in any important way from the functioning of a human brain.

As has already been pointed out, putting enough simple things together often (and perhaps inevitably) results in a completely different, complex thing, with completely different properties, which, as far as we can tell, are not predictable from the properties of the simple things.

To claim that the complex thing does not differ in any important way from the simple thing is, I'll say it again, totally ridiculous.

Ben Zaiboc

From pharos at gmail.com Sat Jan 16 14:56:50 2010
From: pharos at gmail.com (BillK)
Date: Sat, 16 Jan 2010 14:56:50 +0000
Subject: [ExI] University degrees (in response to Emlyn)
In-Reply-To: <889094.11821.qm@web27003.mail.ukl.yahoo.com>
References: <889094.11821.qm@web27003.mail.ukl.yahoo.com>
Message-ID:

On 1/16/10, Tom Nowell wrote:
> So, it seems our current system for keeping people fed and housed
> (or paid in some manner) and trying to harness their talents into work
> that keeps the country going (or "meaningful employment") is flawed.
> Also, the system for educating minds to do theoretical work isn't great
> - for theory work, you need minds that understand the field, access to
> the information of what has already been done, and plenty of time.
> We could employ plenty of intelligent but otherwise underemployed
> people this way, but no-one's found a cheap enough way of imparting
> educations and offering access to journals.
> > You find yourself at the sharp end of the looting of the US capitalist system. Companies are stripped bare, then closed down or moved to China. Wealth and income is concentrated into a smaller and smaller percentage of the people. These super-wealthy few have now used their wealth to take over Congress and the Fed and are now looting the US treasury. Unemployment and food stamps for all is a minor side-effect. Obama was supposed to change all this, but he faces a bought Congress controlled by lobbyists. What are the US people to do? Voting the Republicans back in won't change anything. BillK From jonkc at bellsouth.net Sat Jan 16 17:06:52 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 16 Jan 2010 12:06:52 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B50B764.8010003@satx.rr.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> Message-ID: <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net> On Jan 15, 2010, Damien Broderick wrote: > I just poured 3 cups of water into a 2 cup jar. Does the fact that it stopped accepting water after I'd put in 2 cups and overflowed the rest mean it *understands* 3>2? Yes. > Then I put a 1 foot rule next to a book and the 9 matched up with the top of the book. Did the rule *understand* how tall the book is? Yes. > Computer programs understand nothing more than that. So what? That's enough understanding to work with; embarrassingly too much actually. Gordon thinks that genuine understanding is a completely useless property for intelligent people or machines to have because they would continue to act in exactly the same way whether they have understanding or not. Apparently you believe the same thing; nevertheless for reasons never explained Evolution invented understanding long ago and even more bizarrely saw fit to retain it over hundreds of millions of years. Or at least that's what Gordon claims to have happened because on at least some occasions he says he understands things, you may have made similar assertions in the past. > This all reminds me of the behaviorist idiocy of the 1950s. Given the above you may not be in the strongest position to call anything idiocy. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sat Jan 16 17:23:17 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 16 Jan 2010 18:23:17 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <417975.19906.qm@web36503.mail.mud.yahoo.com> References: <580930c21001150927o3db292c5n7c5428669adc81e1@mail.gmail.com> <417975.19906.qm@web36503.mail.mud.yahoo.com> Message-ID: <580930c21001160923t448b791ch36874c715407842a@mail.gmail.com> 2010/1/16 Gordon Swobe : > If you watch a small child solve a math problem on his fingers then you will watch a fellow human use a simple calculator to facilitate and extend his mathematical understanding. The understanding of the numbers takes place in his mind, not in his fingers. Yes. And association of symbols such as x=3, y=5, then x+y=7 takes place in the CPU, not in the Cd-Rom drive. So? 
-- Stefano Vaj

From stefano.vaj at gmail.com Sat Jan 16 17:27:28 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sat, 16 Jan 2010 18:27:28 +0100
Subject: [ExI] Meaningless Symbols
In-Reply-To: <724270.2104.qm@web113618.mail.gq1.yahoo.com>
References: <724270.2104.qm@web113618.mail.gq1.yahoo.com>
Message-ID: <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com>

2010/1/16 Ben Zaiboc :
> To claim that the complex thing does not differ in any important way from the simple thing is, I'll say it again, totally ridiculous.

Yes and no. It appears that there is one single qualitative threshold (and a pretty low one...) as far as "complexity" is concerned. Beyond that, all conceivable degrees of complexity can be generated by systems having attained the required level, and all that changes is the performances in the completion of a given computation. See again a New Kind of Science.

-- Stefano Vaj

From gts_2000 at yahoo.com Sat Jan 16 17:41:32 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sat, 16 Jan 2010 09:41:32 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
Message-ID: <554821.71139.qm@web36502.mail.mud.yahoo.com>

--- On Fri, 1/15/10, Stathis Papaioannou wrote:

> The drugs can do this only by affecting the behaviour of
> neurons. What you claim is that it is possible to make a physical
> change to a neuron which leaves its behaviour unchanged but changes or
> eliminates the person's consciousness.

You keep assigning absolute atomic status to neurons and their behaviors, forgetting that just as the brain is made of neurons, neurons are made of objects too. Those intra-neuronal objects have as much right to claim atomic status as does the neuron, and larger inter-neuronal structures can also make that claim.

And on top of that you assume that digital simulations of whatever structures you arbitrarily designate as atomic will in fact work exactly like the supposed atomic structures you hope to simulate -- which presupposes that the brain in actual fact exists as a digital computer.

-gts

From jonkc at bellsouth.net Sat Jan 16 17:53:50 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sat, 16 Jan 2010 12:53:50 -0500
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <171918.49385.qm@web36504.mail.mud.yahoo.com>
References: <171918.49385.qm@web36504.mail.mud.yahoo.com>
Message-ID:

On Jan 15, 2010, at 9:02 PM, Gordon Swobe wrote:

> As I pointed out in my original message, "print" counts as a syntactic rule.

Putting things in inflexible little boxes called "syntax" and "semantics" is an entirely human invention; nature doesn't make any such rigid distinctions. In the DNA code CAU means put the amino acid Histidine right here, and I don't care if that's syntax or semantics; it created you.

> Although it stretches the definition of "understanding" I can for the sake of argument agree that s/h systems mechanically understand syntax. They cannot however get semantics from their so-called understanding of syntax.

You have been saying that for over a month now; you have found many new ways to express the same statement, but you have yet to give us one reason to think it is true, and you have ignored the many reasons offered to think it is not true.

> Let us say that we have a sophisticated program that behaves in every way like a human such that it passes the Turing test. We then add to that program the line 'print "Hello World"' (or perhaps 'speak "Hello World"') such that the command will execute at an appropriate time still consistent with passing the Turing test.
> That advanced program will not understand the meaning of "Hello World" any more than does the one-line program running alone.

You have been saying that for over a month now; you have found many new ways to express the same statement, but you have yet to give us one reason to think it is true, and you have ignored the many reasons offered to think it is not true.

> Nor can humans get semantics from syntax, for that matter, and humans really do understand syntax.

What a remarkably silly thing to say! If that were true why would people read books? Why would they even talk to each other?

> S/H systems can do no more than follow syntactic rules for crunching words and symbols. They have no way to attach meanings to the symbols or to understand those meanings. Those semantic functions belong to the humans who program and operate them.

I wish you'd stop dancing around and just say what you believe: humans have something that is not information (software) or matter (hardware); humans have a soul. I don't believe in the soul.

John K Clark

From eric at m056832107.syzygy.com Sat Jan 16 18:03:32 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 16 Jan 2010 18:03:32 -0000
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net>
Message-ID: <20100116180332.5.qmail@syzygy.com>

John Clark writes:
> Gordon thinks that genuine understanding is a completely
>useless property for intelligent people or machines to have because they
>would continue to act in exactly the same way whether they have
>understanding or not.

I'm not at all sure that's what Gordon thinks, although it is difficult to tell for sure.

A claim he's made several times, but which seems to have mostly slipped by unnoticed, is that a program-controlled neuron cannot be made to behave the same as a biological one.

In discussing the partial replacement thought experiment he says that the surgeon will replace the initial set of neurons and find that they don't produce the desired behavior in the patient, so he has to go back and tweak things again. Everyone else seems to think Gordon means that tweaking is in the programming, and that eventually the surgeon manages to get the program right. He's actually said that the surgeon will need to go in and replace more and more of the patient's brain in order to get the patient to pass the Turing test, and that the extensive removal of biological neurons is what turns the patient into a zombie.

Since Gordon also claims that neurons are computable, this seems to me to be a contradiction in his position. My guess at his response to this would be: Sure, neurons may be computable, but we don't CURRENTLY know enough about how they work to duplicate their behavior well enough to support consciousness.

My reply to that would be: by the time we can make replacement neurons we will be very likely to know how they work in sufficient detail. In fact, we currently know a good deal about how they work. What we're missing is the wiring pattern.

I'm going to also guess that Gordon thinks the thing we don't currently know how to do in making a programmatic neuron is to derive semantics from syntax.
I think I remember him saying he believes this to eventually be possible, but that we currently have no clue how.

So, Gordon seems to think that consciousness is apparent in behavior, and thus selectable by evolution. I think that's why he's not interested in your line of argument.

Gordon: did I represent your position accurately here?

-eric

From aware at awareresearch.com Sat Jan 16 18:03:56 2010
From: aware at awareresearch.com (Aware)
Date: Sat, 16 Jan 2010 10:03:56 -0800
Subject: [ExI] Meaningless Symbols
In-Reply-To: <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com>
References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com>
Message-ID:

On Sat, Jan 16, 2010 at 9:27 AM, Stefano Vaj wrote:
> 2010/1/16 Ben Zaiboc :
>> To claim that the complex thing does not differ in any important way from the simple thing is, I'll say it again, totally ridiculous.
>
> Yes and no. It appears that there is one single qualitative threshold
> (and a pretty low one...) as far as "complexity" is concerned. Beyond
> that, all conceivable degrees of complexity can be generated by
> systems having attained the required level, and all that changes is
> the performances in the completion of a given computation. See again a
> New Kind of Science.

Stefano, you are correct and Wolfram makes an important point about "computational irreducibility" in regard to our ability to predict the behavior of complex systems. Beyond a certain point, the only way to know is to run the system and observe the outcomes.

But this does not imply that novel qualitative differences do not continue to emerge as the (4th Law of Thermodynamics?) result of stochastic discovery of synergies exploiting new degrees of freedom. More is indeed different.

And all of this has virtually zero bearing on the exceedingly simple but excruciatingly nonintuitive *epistemological* puzzle of [meaning|semantics|consciousness|qualia|experience|intentionality].

- Jef

From aware at awareresearch.com Sat Jan 16 18:37:32 2010
From: aware at awareresearch.com (Aware)
Date: Sat, 16 Jan 2010 10:37:32 -0800
Subject: [ExI] Meaningless Symbols
In-Reply-To:
References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com>
Message-ID:

This discussion shares much in common with PHIL101-type bantering common in college dorms--less the wine, beer and marijuana.

If you want to gain some traction, might I suggest the following? I've given it to others with useful effect and I think I've posted it here before. If the purpose of this discussion is to increase understanding, rather than just to be right (John, are you listening?) then you should at least be familiar with the thinking presented by John Pollock in this paper.

"So you think you exist? In defense of nolipsism." Coauthored with Jenann Ismael. In Knowledge and Reality: Essays in Honor of Alvin Plantinga (Kluwer), eds. Thomas Crisp, Matthew Davidson, David Vander Laan. Springer Verlag, 2004.

"Human beings think of themselves in terms of a privileged non-descriptive designator -- a mental "I". Such thoughts are called "de se" thoughts. The mind/body problem is the problem of deciding what kind of thing I am, and it can be regarded as arising from the fact that we think of ourselves non-descriptively. Why do we think of ourselves in this way?
We investigate the functional role of "I" (and also "here" and "now") in cognition, arguing that the use of such non-descriptive "reflexive" designators is essential for making sophisticated cognition work in a general-purpose cognitive agent. If we were to build a robot capable of similar cognitive tasks as humans, it would have to be equipped with such designators. Once we understand the functional role of reflexive designators in cognition, we will see that to make cognition work properly, an agent must use a de se designator in specific ways in its reasoning. Rather simple arguments based upon how "I" works in reasoning lead to the conclusion that it cannot designate the body or part of the body. If it designates anything, it must be something non-physical. However, for the purpose of making the reasoning work correctly, it makes no difference whether "I" actually designates anything. If we were to build a robot that more or less duplicated human cognition, we would not have to equip it with anything for "I" to designate, and general physicalist inclinations suggest that there would be nothing for "I" to designate in the robot. In particular, it cannot designate the physical contraption. So the robot would believe "I exist", but it would be wrong. Why should we think we are any different?"

- Jef

From jonkc at bellsouth.net Sat Jan 16 18:12:15 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sat, 16 Jan 2010 13:12:15 -0500
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <20100116180332.5.qmail@syzygy.com>
References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net> <20100116180332.5.qmail@syzygy.com>
Message-ID:

On Jan 16, 2010, Eric Messick wrote:

> Gordon seems to think that consciousness is apparent in behavior,
> and thus selectable by evolution

If so then the Turing Test works. Gordon wants it both ways but he can't have it both ways.

John K Clark

From gts_2000 at yahoo.com Sat Jan 16 18:39:56 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sat, 16 Jan 2010 10:39:56 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <20100115182313.5.qmail@syzygy.com>
Message-ID: <721220.3038.qm@web36506.mail.mud.yahoo.com>

--- On Fri, 1/15/10, Eric Messick wrote:

> My computer does understand numbers, though.

You have an extraordinary computer. My computer seems pretty dumb and boring compared to yours. It treats numbers just as it does other kinds of symbols; that is, it blindly manipulates them according to syntactic rules specified by the programmer. In the case of my pocket calculator, those instructions are hard-coded into the chip by an engineer.

When my dumb calculator answers a difficult mathematical question, I feel grateful not for its imaginary understanding of numbers, but rather for the real understanding of the engineer who designed it.
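If it helps to see what I mean by blind manipulation, here is a sketch of mine (not how any actual calculator chip is built) in which single-digit "addition" is nothing but table lookup over digit symbols. Assume the engineer filled in the table:

  # Single-digit "addition" as blind symbol lookup. The table
  # encodes the engineer's understanding, frozen into the device;
  # the machine that uses it merely shuffles tokens.
  SUM_TABLE = {(str(a), str(b)): str(a + b)
               for a in range(10) for b in range(10)}

  def calculator_add(x, y):
      return SUM_TABLE[(x, y)]      # token in, token out

  print(calculator_add("3", "5"))   # "8"

The lookup succeeds whether or not anything in the device grasps what "8" denotes. On my view that grasping happens only in the engineer and the user.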
-gts From stefano.vaj at gmail.com Sat Jan 16 19:36:52 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 16 Jan 2010 20:36:52 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com> Message-ID: <580930c21001161136t36e5efb3x4047c1ffcc6c6fdb@mail.gmail.com> 2010/1/16 Aware : > Stefano, you are correct and Wolfram makes an important point about > "computational irreducibility" in regard to our ability to predict the > behavior of complex systems. ?Beyond a certain point, the only way to > know is to run the system and observe the outcomes. Yes, I mentioned this point another time with regard to the "computability" issue. But here the subject is different, and it regards the fact that beyond the threshold necesary for universal computation-able systems, which are in principle able to generate or emulate arbitrary degrees of complexity, there is no other "quantum leap". If there were, at least for practical purposes, a good candidate would be quantum computers. But organic brains, besides operating at a wholly different scale, do not exhibit any ability to deal with the kind of probs where quantum computers are expected to make a difference. -- Stefano Vaj From aware at awareresearch.com Sat Jan 16 19:46:04 2010 From: aware at awareresearch.com (Aware) Date: Sat, 16 Jan 2010 11:46:04 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: <580930c21001161136t36e5efb3x4047c1ffcc6c6fdb@mail.gmail.com> References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com> <580930c21001161136t36e5efb3x4047c1ffcc6c6fdb@mail.gmail.com> Message-ID: On Sat, Jan 16, 2010 at 11:36 AM, Stefano Vaj wrote: > 2010/1/16 Aware : >> Stefano, you are correct and Wolfram makes an important point about >> "computational irreducibility" in regard to our ability to predict the >> behavior of complex systems. ?Beyond a certain point, the only way to >> know is to run the system and observe the outcomes. > > Yes, I mentioned this point another time with regard to the > "computability" issue. But ?here the subject is different, and it > regards the fact that beyond the threshold necesary for universal > computation-able systems, which are in principle able to generate or > emulate arbitrary degrees of complexity, there is no other "quantum > leap". Agreed, if you'll make that "no other quantum leap apparent." - Jef From gts_2000 at yahoo.com Sat Jan 16 20:10:57 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 16 Jan 2010 12:10:57 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <20100116180332.5.qmail@syzygy.com> Message-ID: <744923.52636.qm@web36503.mail.mud.yahoo.com> --- On Sat, 1/16/10, Eric Messick wrote: > I'm not at all sure that's what Gordon thinks, although it > is difficult to tell for sure. In a nutshell: the human brain/mind has capabilities that software/hardware systems do not and cannot have. Ergo, we cannot duplicate brains on s/h systems; strong AI is false. > In discussing the partial replacement thought experiment he > says that the surgeon will replace the initial set of neurons and > find that they don't produce the desired behavior in the patient, so he > has to go back and tweak things again. I believe experience affects behavior including neuronal behavior. 
This means the surgeon/programmer of programmatic neurons in the experiment faces an exceedingly difficult if not impossible challenge even in creating weak AI in his patient. He cannot anticipate what kinds of experiences his patient will have after leaving the hospital, but he must program his patient not only to respond appropriately to those experiences but also to change his subsequent behavior appropriately. > Everyone else seems to think Gordon means that tweaking is > in that programming, and that eventually the surgeon manages to get > the program right.? He's actually said that the surgeon > will need to go in and replace more and more of the patient's brain > in order to get the patient to pass the Turing test, and that the > extensive removal of biological neurons is what turns the patient into a > zombie. The patient arrived at the hospital already a near zombie, suffering from a complete receptive aphasia -- a complete inability to understand words -- due to damage to Wernicke's area in his brain. I consider it unclear whether he can survive the operation without losing what little sense of self he might have left. Unclear that he even has a sense of self before the operation. Again he presents with no understanding of words, presumably not even the words "I" and "me". > Since Gordon also claims that neurons are computable, this > seems to me to be a contradiction in his position. I allow that most everything in the world including the brain lends itself to computation. But this fact means nothing. A computational description of a thing amounts to nothing more than a description of the thing, and descriptions of things do not equal the things they describe. > I'm going to also guess that Gordon thinks the thing we > don't currently know how to do in making a programmatic neuron > is to derive semantics from syntax.? I think I remember him saying > he believes this to eventually be possible, but that we currently have > no clue how. No, I deny that formal programs can have or cause semantics. > So, Gordon seems to think that consciousness is apparent in > behavior, Not sure what you mean by apparent, but I do not believe we can prove an entity has consciousness from its behavior. It takes a philosophical argument. > Gordon:? did I represent your position accurately > here? See above. Thanks for joining in by the way. Lots of messages in this thread and I don't always have time to answer even those addressed to me. -gts From thespike at satx.rr.com Sat Jan 16 20:27:10 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 16 Jan 2010 14:27:10 -0600 Subject: [ExI] Meaningless Symbols. In-Reply-To: <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net> Message-ID: <4B52211E.8080002@satx.rr.com> On 1/16/2010 11:06 AM, John Clark wrote: >> I just poured 3 cups of water into a 2 cup jar. Does the fact that it >> stopped accepting water after I'd put in 2 cups and overflowed the >> rest mean it *understands* 3>2? > Yes. > >> Then I put a 1 foot rule next to a book and the 9 matched up with the >> top of the book. Did the rule *understand* how tall the book is? > Yes. > Given the above you may not be in the strongest position to call > anything idiocy. I... see... 
From lacertilian at gmail.com Sat Jan 16 16:54:23 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 16 Jan 2010 08:54:23 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010-01-16 BillK : >But, (there's always a but) ? :) >the Mailman list archives and other mail systems prefer messages in >Plain text (i.e.not HTML). I was so careful not to use any italics or special quote formatting or other frivolity! Today I realize: I could have just hit the Plain Text toggle. This has now been done. 2010-01-16 Stathis Papaioannou : >We do know what most antidepressant drugs do, insofar as we know what >receptors they affect and how exactly they affect them. What we don't >know is why this should have an effect on mood. It's a similar story >with other psychoactive drugs. Right; I should have been more specific. Neurons themselves aren't that mysterious. I wouldn't say we know "exactly" what most of these peculiar little chemicals are physically doing, but certainly we know "roughly". Multiply the completeness of our knowledge about drug-neuron interactions by the completeness of our knowledge about neuron-neuron (or neuron-mind, if you prefer) interactions, of course, and the resulting figure will be more than a little humbling. Afferent-Neuron Alice becomes inexplicably stimulated by the sensory system. Alice shoots electricity through Interneuron Bill to Efferent-Neuron Carl. Carl becomes stimulated for a perfectly good reason. In response, Carl shoots more electricity through four other interneurons. The signal cascades through a bewildering, tortuous pattern of interconnections, roughly localized to a roughly known part of the brain, causing you to drop your ice cream. Aww. A sidenote: if I remember correctly, depression causes hippocampus shrinkage. Antidepressants cause the hippocampus to regrow, or regrow faster. This was only discovered very recently. I doubt I have to stress how very baffling it is, but I will anyway: it is very baffling. From lacertilian at gmail.com Sat Jan 16 17:34:03 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 16 Jan 2010 09:34:03 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: <724270.2104.qm@web113618.mail.gq1.yahoo.com> References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> Message-ID: On Sat, Jan 16, 2010 at 5:03 AM, Ben Zaiboc wrote: > To claim that the complex thing does not differ in any important way from the simple thing is, I'll say it again, totally ridiculous. (Stefano responded to the very same thing shortly before I could finish this. But I like my version better anyway! Click. Send.) Gordon was careful to specify: "any important PHILOSOPHICAL way". Philosophy is notoriously vague and broad, with a marked proclivity for blurred distinctions. I haven't found much to agree with in what Gordon Swobe has written since my admittedly-recent subscription, but here I find myself firmly on his side. A solar system doesn't differ in any important philosophical way from a hydrogen atom. I almost convinced myself that it did, but it doesn't: both are held together by about four fundamental forces in varying ratios, both can be divided into smaller parts, both are observable in physical reality, both are composed of... wait! There is a distinction! Solar systems have more distinct species of particle than hydrogen atoms. They're loaded with neutrinos, strange/charm/top/bottom quarks, and dark matter. 
That's a basic ontological difference by my book, and all ontological differences are philosophical differences. But, "hello world" is not different from, say, Firefox. An obvious counter-argument would point out that Firefox can connect to the Internet, and engages in a complex interaction with at least one human being. But to the computer, people are just random number generators. Accuse me of the pathetic fallacy at leisure; it is just too convenient as shorthand to avoid. You are a computer. Let's say you run Linux. The particular distribution is unimportant. To you, there is no important philosophical difference between humans and /dev/urandom, nor between the hard drive and the Internet. "But the Internet changes independent of CPU activity", cries the devil's advocate! To which I say: how do you know? Maybe the hard drive just changes more slowly. I could bring cosmic radiation into the mix here, but I won't. It's a cheap shot. The fact is, you can check the same file as many times as you want, but you can never prove that it will still be the same file the next time you check it. There are plenty of practical differences between the very simple and the very complex, but if there are philosophical differences I haven't found them. Complexity is just another scalar, no more philosophically pithy than volume or velocity. Unless you take the directional component of velocity into account. Then it's a vector, as far removed from the lowly scalar as hypothetical man is from hypothetical God. From lacertilian at gmail.com Sat Jan 16 18:43:17 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 16 Jan 2010 10:43:17 -0800 Subject: [ExI] University degrees (in response to Emlyn) In-Reply-To: <889094.11821.qm@web27003.mail.ukl.yahoo.com> References: <889094.11821.qm@web27003.mail.ukl.yahoo.com> Message-ID: 2010-01-16 Tom Nowell : > In our modern age, where colossal numbers of books are available and huge amounts of information online, can we educate ourselves easily? Or do we once again find that we only value what is paid for? It's a false dichotomy. Unless you mean the value one places on one's own education, but that isn't the impression I get. There seem to be two questions embedded and entangled within this one: "Is it possible right now to keep educational pace with students autodidactically?" and "Is it possible right now to keep academic pace with students autodidactically?" This is probably a confusing distinction for most people. Schooling is not education, and education is not schooling. I thought that way before I read John Holt, and I think that way all the more now. As my academic career consists of all of three days of preschool, you can safely consider me biased in this area. So, for what it's worth: I can run circles around your average college or university student when it comes to just about any subject except the one they're cramming at the moment. Of course people can educate themselves, given the resources. The only question is whether or not that education counts. It's an absurd and despicable question, but I won't deny that it's valid. I have no credentials whatsoever. If I want a good job, I have to create it myself from scratch. It's either that or prove my competence to someone in a position to get me into a position to put someone into a position to get me into a position to... Sorry, infinite recursion. It's either that or I have to ingratiate myself to some higher-ups. A very touch-and-go, needlessly dehumanizing prospect. 
I would much rather devote my time toward giving the local green party enough intellectual muscle to uproot the economy for 10 km in every direction. Afterward, maybe more delicate species such as myself will have a chance. From gts_2000 at yahoo.com Sat Jan 16 21:10:23 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 16 Jan 2010 13:10:23 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <580930c21001160923t448b791ch36874c715407842a@mail.gmail.com> Message-ID: <701430.26678.qm@web36508.mail.mud.yahoo.com> --- On Sat, 1/16/10, Stefano Vaj wrote: >> If you watch a small child solve a math problem on his >> fingers then you will watch a fellow human use a simple >> calculator to facilitate and extend his mathematical >> understanding. The understanding of the numbers takes place >> in his mind, not in his fingers. > > Yes. And association of symbols such as x=3, y=5, then > x+y=7 takes place in the CPU, not in the Cd-Rom drive. I think x+y=8, but anyway... > So? So.... you understand a computation took place in your CPU, not unlike the child who added 3 fingers to 5 and got 8. Do you think your CPU understood it? If you do then I wonder if you also think the child's fingers understood it. -gts From thespike at satx.rr.com Sat Jan 16 21:22:35 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 16 Jan 2010 15:22:35 -0600 Subject: [ExI] Meaningless Symbols. In-Reply-To: <701430.26678.qm@web36508.mail.mud.yahoo.com> References: <701430.26678.qm@web36508.mail.mud.yahoo.com> Message-ID: <4B522E1B.10207@satx.rr.com> On 1/16/2010 3:10 PM, Gordon Swobe wrote: > So.... you understand a computation took place in your CPU, not unlike the child who added 3 fingers to 5 and got 8. > > Do you think your CPU understood it? If you do then I wonder if you also think the child's fingers understood it. But John Clark has just stated unequivocally that *he* at least thinks so. Yep, the fingers *understood* the calculation. The cloud *understands* how to rain. I suppose we come to a point here where the use of words has broken down. Or perhaps the use of concepts. No point continuing the discussion. Damien Broderick From gts_2000 at yahoo.com Sat Jan 16 21:36:23 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 16 Jan 2010 13:36:23 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <315789.42273.qm@web36505.mail.mud.yahoo.com> > To claim that the complex thing does not differ in > any important way from the simple thing is, I'll say it > again, totally ridiculous. My claim is that the complex program does not have anything at the moment it executes print "Hello World"; that the simple one-line program alone does not have. If anything is ridiculous, it's the notion that either the simple or the complex versions of the program will cause a conscious entity to pop into the world to say hello. -gts From eric at m056832107.syzygy.com Sat Jan 16 21:40:13 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 16 Jan 2010 21:40:13 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <744923.52636.qm@web36503.mail.mud.yahoo.com> References: <20100116180332.5.qmail@syzygy.com> <744923.52636.qm@web36503.mail.mud.yahoo.com> Message-ID: <20100116214013.5.qmail@syzygy.com> Gordon writes: >In a nutshell: the human brain/mind has capabilities that > software/hardware systems do not and cannot have. Ergo, we cannot > duplicate brains on s/h systems; strong AI is false. 
and, later: >I allow that most everything in the world including the brain lends > itself to computation. But this fact means nothing. A computational > description of a thing amounts to nothing more than a description of > the thing, and descriptions of things do not equal the things they > describe. Would you say that "description != thing" is the reason computer systems cannot replicate the capability of brains to understand? In other words, if we could make a simulation of water that actually included wetness, would we also be able to write a program that was conscious of that wetness? >I believe experience affects behavior including neuronal > behavior. This means the surgeon/programmer of programmatic neurons > in the experiment faces an exceedingly difficult if not impossible > challenge even in creating weak AI in his patient. He cannot > anticipate what kinds of experiences his patient will have after > leaving the hospital, but he must program his patient not only to > respond appropriately to those experiences but also to change his > subsequent behavior appropriately. One of the primary behaviors of neurons is to change their response to signals over time. The basic way this happens is well characterized. Any programmatic neuron would be coded to change in the same manner. The mechanism is not all that complicated. It is also the fundamental mechanism behind learning, and the way in which experience alters future behavior. Have you studied the molecular pathways that mediate these changes? Do you have any reason to think this type of change would be difficult to program? >No, I deny that formal programs can have or cause semantics. I think you have a very different meaning for the word "semantics" than most of the rest of us engaging in this discussion. I suspect that this difference also stems from "description != thing". >> So, Gordon seems to think that consciousness is apparent in >> behavior, > >Not sure what you mean by apparent, but I do not believe we can prove > an entity has consciousness from its behavior. It takes a > philosophical argument. By apparent, I mean that an individual who is capable of consciousness will behave differently from one who is incapable. That difference in behavior is something that evolution could select for. Essentially, consciousness makes you more fit in some way. That doesn't necessarily mean that we can deduce the existence of consciousness based on any specific trait. -eric From pharos at gmail.com Sat Jan 16 22:27:58 2010 From: pharos at gmail.com (BillK) Date: Sat, 16 Jan 2010 22:27:58 +0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <744923.52636.qm@web36503.mail.mud.yahoo.com> References: <20100116180332.5.qmail@syzygy.com> <744923.52636.qm@web36503.mail.mud.yahoo.com> Message-ID: On 1/16/10, Gordon Swobe wrote: > In a nutshell: the human brain/mind has capabilities that software/hardware > systems do not and cannot have. Ergo, we cannot duplicate brains on s/h > systems; strong AI is false. > > Gordon and John will probably be interested in this company. It seems to be providing a very useful service. Is your life weighing you down? Lighten up! If your soul is keeping you from finding the happiness you deserve, there is an easy, painless and safe solution. Remove it, store it, and walk away a happier person. The Soul Storage Company's patented De-Souling(TM) technique actually allows its clients to have their souls removed, whether permanently or just for a little "mental vacation."
After a simple and painless outpatient procedure, you will walk out of our doors unburdened, leaving your soul behind in our secure, state-of-the-art storage facilities. -------------------- BillK From gts_2000 at yahoo.com Sat Jan 16 22:30:01 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 16 Jan 2010 14:30:01 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <20100116214013.5.qmail@syzygy.com> Message-ID: <267629.67426.qm@web36508.mail.mud.yahoo.com> --- On Sat, 1/16/10, Eric Messick wrote: > Would you say that "description != thing" is the reason > computer systems cannot replicate the capability of brains to > understand? In a general sense, yes. I think S/H systems can simulate understanding as in weak AI but that they cannot have conscious understanding as in strong AI. And conscious understanding affects behavior including the behavior of those neurons associated with understanding, making weak AI itself a formidable challenge. > In other words, if we could make a simulation of water that > actually included wetness, would we also be able to write a program > that was conscious of that wetness? If you bring to me a simulation of water that "actually includes wetness" then I will ask why you call it a simulation of water. Sure looks like real water to me. And no I don't think s/h systems can have first-person consciousness of that real quality of wetness, of what you might call the quale of wetness. > Have you studied the molecular pathways that mediate these > changes? I've already assumed the programmer knows everything possible about the biology of experience. > I think you have a very different meaning for the word > "semantics" In the broadest sense I mean the having of conscious experience of any kind, but we concern ourselves here mainly with the kinds of mental contents most closely associated with intelligence, e.g., the experience of understanding words and numbers. > By apparent, I mean that an individual who is capable of > consciousness will behave differently from one who is incapable. > That difference in behavior is something that evolution could select > for. Essentially, consciousness makes you more fit in some way. That > doesn't necessarily mean that we can deduce the existence of > consciousness based on any specific trait. I agree that evolution selected for conscious experience, but perhaps only because nature found it more economical than the alternative. -gts From lacertilian at gmail.com Sat Jan 16 22:46:59 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 16 Jan 2010 14:46:59 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com> Message-ID: 2010-01-16 Aware : > This discussion shares much in common with PHIL101-type bantering > common in college dorms--less the wine, beer and marijuana. > > If you want to gain some traction, might I suggest the following? > I've given it to others with useful effect and I think I've posted it > here before. If the purpose of this discussion is to increase > understanding, rather than just to be right (John, are you listening?) > then you should at least be familiar with the thinking presented by > John Pollack in this paper. How do you know? Everyone on the Internet is perpetually trapped in a quantum superposition of drunk, high and sober. Now that I think about it, that explains a lot.
Twenty-nine pages later, I feel confident in saying something with a little more substance. But only a little. It took me a while to relate it to the current subject, but then I came up with this: "Does the symbol 'I' have meaning?". Looking at it that way, I'm not so sure the concept of meaning is meaningful. Two definitions are competing for sovereignty in my head: "A symbol is meaningful if it has a referent." or "A symbol is meaningful if it is part of a consistent function." Interpreting the first definition is easy. The second definition hearkens back to an argument made here yesterday. 2010-01-15 Eric Messick : > The fact that [a computer] comes up with the correct answers for the following: > > 0 + 0 = 0 > 1 + 1 = 2 > > indicates that it understands the fundamental difference between the numbers zero and one. In this case, '0', '1', and '+' are the symbols. Pay no mind to '=' and '2', for they are there only for our convenience. It seems to be inherent in the meaning of these symbols that they can be put together in one-dimensional sequence, and that any sequence of the symbols is itself a symbol. Some of the resulting compounds are meaningful, and some aren't. For example: '0 + 1' is meaningful. '0 1 +' is not. Unless we're using reverse Polish notation, which we should, but aren't. It's also noteworthy that there are more meaningful symbols than there are meanings. '0 + 1' is a different symbol from '1' by itself, but, obviously, they share precisely the same meaning. The only question is, are the meanings in the computer, or in our heads? I'm playing catch-up, here, so please excuse the redundancy. It seems to me that the question is, itself, meaningless. If I take John Pollack's paper as an axiomatic starting point, it seems readily apparent that 'I' has meaning by the same definition as '1', that is, it plays a significant role in forming meaningful compound symbols. 'I am human' means something very different from 'You are human', where 'human', 'am' and 'are' are symbols too. 'am' has the same meaning as 'are', though, again, it is obviously a different symbol. The grammatically astute will have a few more fly off the top of their heads right now. So, back to the present. 'I' has meaning to us. Therefore, if a machine can be built which uses 'I' in exactly the same way, both in conversation and within its own c-thoughts, 'I' will have meaning to that machine. Anyone have a soldering iron handy? From bbenzai at yahoo.com Sat Jan 16 23:27:51 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 16 Jan 2010 15:27:51 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <969914.98305.qm@web113608.mail.gq1.yahoo.com> Gordon Swobe claimed: > you assume that digital simulations of whatever structures you > arbitrarily designate as atomic will in fact > work exactly like the supposed atomic structures you hope to simulate -- > which presupposes that the brain in actual fact exists as a digital > computer. This is just not true. You're saying that only digital computers can be simulated by digital computers. It's trivially obvious that this can't be true. We routinely simulate many processes that aren't in themselves digital (let alone digital computers), on digital computers. Your idea of simulation seems to be a very simplistic one, which ignores that there are many levels of reality, from the subatomic to the mental models that we create, and perhaps more beyond that.
If I want to create a model of traffic flow, for instance, I could try to do it at a very low-resolution level with variables that represent groups of vehicles with something in common, or I could create a higher resolution model by representing individual cars, or I could go much higher, and create a model that captures things like weather conditions affecting the road surfaces, psychological states of drivers, mechanical differences between different vehicles, etc., etc. I doubt if anyone would argue that this highly detailed simulation would not be a good representation of real-life traffic, and of course it would be run in a digital computer. Would you say that the traffic itself therefore must exist as a digital computer? Just because the model works exactly like the traffic it simulates? Ben Zaiboc From bbenzai at yahoo.com Sun Jan 17 00:04:18 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 16 Jan 2010 16:04:18 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <594404.26085.qm@web113613.mail.gq1.yahoo.com> Gordon Swobe wrote: > > My claim is that the complex program does not have anything > at the moment it executes > > print "Hello World"; > > that the simple one-line program alone does not have. Which is equivalent to saying that the brain does not have anything at the moment it executes [instruction to move leg] that a simple spinal reflex to move the leg does not have. > > If anything is ridiculous, it's the notion that either the > simple or the complex versions of the program will cause a > conscious entity to pop into the world to say hello. And here we have the crux of the whole thing. This 'conscious entity that pops into the world'. What is that, exactly? The ridiculous notion is that it makes sense to equate a single piece of gravel to the entire global network of roads, or a metal tube to a fleet of jumbo jets. A simple circuit or its equivalent as a program instruction doesn't have the potential for creating and maintaining complex models with complex relationships between them, but a few million of them working together do. Do you think that a spinal reflex loop is conscious? Even a tiny bit? Ben Zaiboc From stathisp at gmail.com Sun Jan 17 01:14:00 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jan 2010 12:14:00 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <554821.71139.qm@web36502.mail.mud.yahoo.com> References: <554821.71139.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/17 Gordon Swobe : > --- On Fri, 1/15/10, Stathis Papaioannou wrote: > >> The drugs can do this only by affecting the behaviour of >> neurons. What you claim is that it is possible to make a physical >> change to a neuron which leaves its behaviour unchanged but changes or >> eliminates the person's consciousness. > > You keep assigning absolute atomic status to neurons and their behaviors, forgetting that just as the brain is made of neurons, neurons are made of objects too. Those intra-neuronal objects have as much right to claim atomic status as does the neuron, and larger inter-neuronal structures can also make that claim. Everything I've said applies equally well if you consider simulating the behaviour of subneuronal or multineuronal structures. Neurons are just a convenient unit to work with.
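To make that concrete, here is a minimal sketch (Python, purely illustrative -- the threshold and leak constants are invented, not physiological) of a leaky integrate-and-fire unit. Note that nothing in the code cares whether the unit stands for a patch of membrane, a single neuron or a cluster of neurons; only the input-output behaviour is specified:

# Illustrative only: a leaky integrate-and-fire unit. The same update
# rule serves whether the "unit" models part of a neuron, one neuron,
# or a group of neurons -- only input/output behaviour is specified.
class Unit:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # accumulated input
        self.threshold = threshold  # firing threshold (arbitrary units)
        self.leak = leak            # per-step decay of the potential

    def step(self, input_current):
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # spike
        return 0                    # no spike

unit = Unit()
print([unit.step(x) for x in [0.3, 0.4, 0.5, 0.1, 0.9]])  # [0, 0, 1, 0, 0]

Swap in a finer-grained or coarser-grained model and everything I have said goes through unchanged.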
> And on top of that you assume that digital simulations of whatever structures you arbitrarily designate as atomic will in fact work exactly like the supposed atomic structures you hope to simulate -- which presupposes that the brain in actual fact exists as a digital computer. It presupposes that the brain's processes can be described algorithmically. This is probably true but not certainly true. If it is true, then it is possible to make a computerised brain that acts exactly like a biological brain and has exactly the same consciousness as the biological brain. As I've explained several times, to deny this last statement leads to self-contradiction. I think you have understood this because as Eric has also pointed out, in the partial brain replacement thought experiment you claim that the patient *won't* behave normally and the surgeon will have to tweak the rest of his brain to make him pass as normal. But of course that is saying that the artificial neurons were not zombie neurons to begin with, since a zombie neuron by definition behaves exactly the same as a biological neuron. So if you want to maintain that computers can't be conscious you are forced to agree that the brain is not computable, and hence that zombies and weak AI are not possible. I should add that while this thought experiment does not prove computationalism (since there is the possibility that the brain is not computable) it does prove functionalism, of which computationalism is a subset. That is, it proves that you cannot separate consciousness from intelligent behaviour, or equivalently that consciousness cannot be due to some essential substance or process in the brain. For otherwise zombified brain components would be conceptually possible, leading as before to logical contradiction. -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 17 04:00:23 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jan 2010 15:00:23 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <267629.67426.qm@web36508.mail.mud.yahoo.com> References: <20100116214013.5.qmail@syzygy.com> <267629.67426.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/17 Gordon Swobe : > --- On Sat, 1/16/10, Eric Messick wrote: > >> Would you say that "description != thing" is the reason >> computer systems cannot replicate the capability of brains to >> understand? > > In a general sense, yes. I think S/H systems can simulate understanding as in weak AI but that they cannot have conscious understanding as in strong AI. > > And conscious understanding affects behavior including the behavior of those neurons associated with understanding, making weak AI itself a formidable challenge. Let's look at what it means to say that conscious understanding affects behaviour. Suppose you are deciding between buying full cream or low fat milk. There are many considerations: the possible differential effects on your health, what you will do with the milk and the likely difference in taste according to application, the preferences of anyone else you are going to share the milk with, and so on. What seems a trivial task requires quite a deep understanding of the world with analysis of your own preferences and the possible consequences of your behaviour. But if I look inside your head I don't see any of this internal debate. What I see is a complex dance of electrical impulses across neurons, following mechanistic rules, and resulting in a signal sent to the muscles in your arm so that you reach out and pick the full cream milk. 
Did your understanding affect your decision? In one sense, obviously yes; but from the external observer's point of view your brain was just doing what it had to, blindly following the laws of physics. If your understanding had been different your decision might have been different, but your understanding could *only* have been different if the physical activity in your brain had been different, and a full account of the physical effects is enough to explain the behaviour without reference to consciousness. This makes consciousness epiphenomenal. The same is true of a computer program. The computer does what it does because its components follow the laws of physics. If the program were different the computer would have behaved differently, but the program could not have been different unless the configuration of the computer were different. The program is just a mental aid for the programmer to set up the computer to behave in a particular way, and has no separate causal potency of its own. -- Stathis Papaioannou From msd001 at gmail.com Sun Jan 17 05:54:39 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 17 Jan 2010 00:54:39 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B522E1B.10207@satx.rr.com> References: <701430.26678.qm@web36508.mail.mud.yahoo.com> <4B522E1B.10207@satx.rr.com> Message-ID: <62c14241001162154q14cd50aasa3e7bf2a80995370@mail.gmail.com> On Sat, Jan 16, 2010 at 4:22 PM, Damien Broderick wrote: > to rain. I suppose we come to a point here where the use of words has broken > down. Or perhaps the use of concepts. No point continuing the discussion. I agree. I considered suggesting a subject-line change to "meaningless discussion" then thought it'd be more productive to just create a filter so I stop seeing these threads in my inbox... From stathisp at gmail.com Sun Jan 17 06:49:33 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jan 2010 17:49:33 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <62c14241001162154q14cd50aasa3e7bf2a80995370@mail.gmail.com> References: <701430.26678.qm@web36508.mail.mud.yahoo.com> <4B522E1B.10207@satx.rr.com> <62c14241001162154q14cd50aasa3e7bf2a80995370@mail.gmail.com> Message-ID: 2010/1/17 Mike Dougherty : > On Sat, Jan 16, 2010 at 4:22 PM, Damien Broderick wrote: >> to rain. I suppose we come to a point here where the use of words has broken >> down. Or perhaps the use of concepts. No point continuing the discussion. > > I agree. I considered suggesting a subject-line change to > "meaningless discussion" then thought it'd be more productive to just > create a filter so I stop seeing these threads in my inbox... Wouldn't it be more productive still to explain what you mean by meaning, and see if the disagreement is a substantive one or has to do with the use of words? -- Stathis Papaioannou From eric at m056832107.syzygy.com Sun Jan 17 07:00:55 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 17 Jan 2010 07:00:55 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <267629.67426.qm@web36508.mail.mud.yahoo.com> References: <20100116214013.5.qmail@syzygy.com> <267629.67426.qm@web36508.mail.mud.yahoo.com> Message-ID: <20100117070055.5.qmail@syzygy.com> Gordon writes: >--- On Sat, 1/16/10, Eric Messick wrote: >> Would you say that "description != thing" is the reason >> computer systems cannot replicate the capability of brains to >> understand? > >In a general sense, yes.
I think S/H systems can simulate > understanding as in weak AI but that they cannot have conscious > understanding as in strong AI. Putting these together, what we're discussing here is the truth value of the statement: "simulated understanding != real understanding", and what you might consider a corollary: "simulated consciousness != real consciousness". You seem to take these statements as axioms. I think that both consciousness and understanding are computational processes. Turing showed that beyond a very simple set of capabilities, all computational processes can be considered equivalent (modulo obvious performance differences). If both these things are true, then simulated consciousness must equal real consciousness, in violation of your axiom. Unless you want to argue with Turing, we can construct the following statement: If simulated consciousness is not equivalent to real consciousness, then consciousness is not a computational process. Do you agree that this is a true statement? Perhaps instead, you've taken the consequent part as your axiom, and derived your thoughts about simulated equivalence. In any case, it appears we have someone taking A as an axiom talking with someone taking ~A as an axiom. Not easy to resolve. >> Have you studied the molecular pathways that mediate these >> changes? > >I've already assumed the programmer knows everything possible about > the biology of experience. That sounds like a "no" to me, which is ok, except that you're trying to argue that something you haven't studied can't be adequately simulated. >> I think you have a very different meaning for the word >> "semantics" > >In the broadest sense I mean the having of conscious experience of > any kind, but we concern ourselves here mainly with the kinds of > mental contents most closely associated with intelligence, e.g., the > experience of understanding words and numbers. The word "experience" has two meanings which could be operating here: 1) we learn semantics by experience, or 2) semantics is the experience (or quale) of understanding. I think semantics are just facts of a particular type. Things that seem like that thing over there are called "balls". That's a semantic relationship, which helps to establish the meaning of the word "ball". The semantic fact remains even if I'm not experiencing it at the moment. I took a quick peek at the Wikipedia entry for Semantics, and there is very little there to support your usage. The closest I could find was this: In Chomskian linguistics there was no mechanism for the learning of semantic relations, and the nativist view considered all semantic notions as inborn. Thus, even novel concepts were proposed to have been dormant in some sense. This view was also thought unable to address many issues such as metaphor or associative meanings, and semantic change, where meanings within a linguistic community change over time, and qualia or subjective experience. Now, I wouldn't count Wikipedia as authoritative, but I'm going to presume your usage is non-standard unless you can show otherwise. This is probably the source of the "can syntax produce semantics" disagreement. It sounds like what you really mean by that is: syntax cannot produce the quale of understanding, which is only a step or two away from the axiom I speculated above that you hold. Hence the speculation that you're holding this as an axiom too. Are you ready to take the red pill and try out living in a world where ~A is true? Non-Euclidean geometry is fun!
-eric From stathisp at gmail.com Sun Jan 17 09:19:13 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jan 2010 20:19:13 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <20100117070055.5.qmail@syzygy.com> References: <20100116214013.5.qmail@syzygy.com> <267629.67426.qm@web36508.mail.mud.yahoo.com> <20100117070055.5.qmail@syzygy.com> Message-ID: 2010/1/17 Eric Messick : > Gordon writes: >>--- On Sat, 1/16/10, Eric Messick wrote: >>> Would you say that "description != thing" is the reason >>> computer systems cannot replicate the capability of brains to >>> understand? >> >>In a general sense, yes. I think S/H systems can simulate >> understanding as in weak AI but that they cannot have conscious >> understanding as in strong AI. > > Putting these together, what we're discussing here is the truth value > of the statement: "simulated understanding != real understanding", and > what you might consider a corollary: "simulated consciousness != real > consciousness". > > You seem to take these statements as axioms. > > I think that both consciousness and understanding are computational > processes. Turing showed that beyond a very simple set of > capabilities, all computational processes can be considered equivalent > (modulo obvious performance differences). If both these things are > true, then simulated consciousness must equal real consciousness, in > violation of your axiom. > > Unless you want to argue with Turing, we can construct the following > statement: > > If simulated consciousness is not equivalent to real consciousness, > then consciousness is not a computational process. > > Do you agree that this is a true statement? I am guessing that Gordon's answer will be an emphatic "yes". > Perhaps instead, you've taken the consequent part as your axiom, and > derived your thoughts about simulated equivalence. > > In any case, it appears we have someone taking A as an axiom talking > with someone taking ~A as an axiom. Not easy to resolve. I've tried to show with the brain replacement thought experiment what simulated understanding done properly would be like. It would mean that if your understanding of language, or your perceptions, or emotions, or any other important aspect of consciousness were suddenly zombified your behaviour would remain unchanged and you would not notice that anything odd had happened. If there is no subjective or objective difference between simulated (zombie) understanding and real understanding, then it seems absurd to insist that they are different things. I believe that Gordon can see this also, which is why he initially said that the experiment was too ridiculous to contemplate and now says that the zombie implant won't really be a zombie implant. -- Stathis Papaioannou From stefano.vaj at gmail.com Sun Jan 17 13:27:43 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 17 Jan 2010 14:27:43 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <315789.42273.qm@web36505.mail.mud.yahoo.com> References: <315789.42273.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21001170527v309f34f4vc519351b1e0cb525@mail.gmail.com> 2010/1/16 Gordon Swobe : > If anything is ridiculous, it's the notion that either the simple or the complex versions of the program will cause a conscious entity to pop into the world to say hello. This is why "conscious entities" in such a mystical sense probably do not exist at all...
-- Stefano Vaj From stathisp at gmail.com Sun Jan 17 13:34:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jan 2010 00:34:22 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <580930c21001170527v309f34f4vc519351b1e0cb525@mail.gmail.com> References: <315789.42273.qm@web36505.mail.mud.yahoo.com> <580930c21001170527v309f34f4vc519351b1e0cb525@mail.gmail.com> Message-ID: 2010/1/18 Stefano Vaj : > 2010/1/16 Gordon Swobe : >> If anything is ridiculous, it's the notion that either the simple or the complex versions of the program will cause a conscious entity to pop into the world to say hello. > > This is why "conscious entities" in such a mystical sense probably do > not exist at all... I assume when people say that they don't really mean what it sounds like, but rather they mean that consciousness is just an epiphenomenon or necessary side-effect of intelligent behaviour. -- Stathis Papaioannou From gts_2000 at yahoo.com Sun Jan 17 19:03:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 17 Jan 2010 11:03:36 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B522E1B.10207@satx.rr.com> Message-ID: <145747.71057.qm@web36507.mail.mud.yahoo.com> --- On Sat, 1/16/10, Damien Broderick wrote: > On 1/16/2010 3:10 PM, Gordon Swobe wrote: > >> So.... you understand a computation took place >> in your CPU, not unlike the child who added 3 fingers to 5 >> and got 8. >> >> Do you think your CPU understood it? If you do then I >> wonder if you also think the child's fingers understood it. > > But John Clark has just stated unequivocally that *he* at > least thinks so. Yep, the fingers *understood* the > calculation. The cloud *understands* how to rain. I suppose > we come to a point here where the use of words has broken > down. Or perhaps the use of concepts. Minds sometimes wander off onto the wrong subjects. John's wanders off onto the wrong objects. -gts From bbenzai at yahoo.com Sun Jan 17 18:37:07 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 17 Jan 2010 10:37:07 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <981589.35786.qm@web113610.mail.gq1.yahoo.com> Eric Messick asked: > > Gordon writes: > >In a nutshell: the human brain/mind has capabilities > that > > software/hardware systems do not and cannot have. > Ergo, we cannot > > duplicate brains on s/h systems; strong AI is false. > > and, later: > > >I allow that most everything in the world including the > brain lends > > itself to computation. But this fact means nothing. A > computational > > description of a thing amounts to nothing more than a > description of > > the thing, and descriptions of things do not equal the > things they > > describe. > > Would you say that "description != thing" is the reason > computer > systems cannot replicate the capability of brains to > understand? Gordon does not seem to appreciate the difference between a description and a simulation, and presumably thinks that possessing a shopping list is equivalent to doing the shopping. This in addition to believing that only a biological creature can do 'real' shopping, and if a robot does it, it's not real, regardless of the fact that the fridge still gets filled up.
Ben Zaiboc From gts_2000 at yahoo.com Sun Jan 17 19:26:58 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 17 Jan 2010 11:26:58 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: <969914.98305.qm@web113608.mail.gq1.yahoo.com> Message-ID: <802682.54887.qm@web36502.mail.mud.yahoo.com> --- On Sat, 1/16/10, Ben Zaiboc wrote: > You're saying that only digital computers can be simulated by digital > computers. Yes. More precisely, if I simulate a digital computer or program on a digital computer then I do not call that thing a simulation. I call it a copy. But as you've gathered I do not believe real brains exist as digital computers. I can in theory simulate them but I cannot copy them. > It's trivially obvious that this can't be > true. We routinely simulate many processes that aren't > in themselves digital (let alone digital computers), on > digital computers. Yes we certainly do simulate many non-digital processes on digital computers, but those simulations never become more than mere simulations. I've used the water simulation as an example: a computer simulation of frozen water only appears solid, and a simulation of water in liquid form only appears liquid, and so on. They appear to have these properties but they do not actually have them. Computer simulations of things never equal the things they simulate, except that you may choose to imagine so. Just as computer simulations of ice-cubes will have no real property of solidity, computer simulations of human brains will have no real property of consciousness, except that you may choose to imagine so. -gts From stefano.vaj at gmail.com Sun Jan 17 21:54:25 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 17 Jan 2010 22:54:25 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <315789.42273.qm@web36505.mail.mud.yahoo.com> <580930c21001170527v309f34f4vc519351b1e0cb525@mail.gmail.com> Message-ID: <580930c21001171354md6a7cabwf8ed16f42c8703dc@mail.gmail.com> 2010/1/17 Stathis Papaioannou : > I assume when people say that they don't really mean what it sounds > like, but rather they mean that consciousness is just an epiphenomenon > or necessary side-effect of intelligent behaviour. No. A side-effect is something which exists per se. "Consciousness" is simply a description and/or "interpretation" of a set of phenomena - i.e., "outputs", behavioural or otherwise - which does not really refer to anything materially added to, or distinct from, such set. Any such distinction proves indeed elusive for its partisans, and immediately leads to paradoxes without being required for any sensible epistemological purpose. -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 17 22:08:26 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 17 Jan 2010 23:08:26 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <701430.26678.qm@web36508.mail.mud.yahoo.com> References: <580930c21001160923t448b791ch36874c715407842a@mail.gmail.com> <701430.26678.qm@web36508.mail.mud.yahoo.com> Message-ID: <580930c21001171408k408c71e3q9e3d86997d77671a@mail.gmail.com> 2010/1/16 Gordon Swobe : > Do you think your CPU understood it? If you do then I wonder if you also think the child's fingers understood it. The meaning of a symbol is simply its association with another symbol. I do not know what you mean by "understanding", but you can associate symbols by whatever method.
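By way of illustration only -- a toy sketch in Python, with invented bindings, not a claim about how any particular machine or brain does it -- "meaning" here is nothing but a table of associations, and evaluation is the act of following them:

# Toy sketch: "meaning" as association between symbols.
bindings = {"x": 3, "y": 4}

def meaning(symbol):
    return bindings[symbol]         # "understanding" a symbol = a lookup

def add(a, b):
    return meaning(a) + meaning(b)  # associating two symbols with a third

print(add("x", "y"))  # -> 7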
Your brain - which usually should not require fingers to perform such a calculation :-) - does not do anything really different from a computer CPU when it understands that the "meaning" of x is 3 and that of y is 4 and concludes that their sum is 7. It may do so in a slightly less efficient way, but the process remains obviously computational. -- Stefano Vaj From lcorbin at rawbw.com Sun Jan 17 23:11:07 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 17 Jan 2010 15:11:07 -0800 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded Message-ID: <4B53990B.8010202@rawbw.com> The central question, of course, is whether one would *survive* the uploading process. (This emphasis on the concept of survivability I first saw in Parsons (1984), "Reasons and Persons", which was the very first up-to-speed account of identity in the English language, so far as I know.) We have discussed here what possible danger would attend the replacement of neurons by artificial ones, or to go further, say the replacement of an entire neural tract connecting, for example, the amygdala and the hippocampus. Whereas hitherto masses of neurons fired, we now have a relatively simple electric circuit (though it still has, of course, to implement the tens of millions of functional inputs and outputs realized by the former mass of neurons in the tract). The wrong question, which has been repeatedly voiced, is "would I notice?". This is completely wrong because *noticing* is an active neural behavior which itself is composed of the firings of millions of neurons. Of course no "noticing" of that kind would occur, because under the hypothesis, the entire effective circuit between the hippocampus and the amygdala has been functionally and accurately replaced by an electric one. Suppose you had a switch and a couple of movies to watch. When the switch is in position A your original neural tract operates, and when it's in B, the electric circuit acts instead. During the first movie you watch, you keep the switch in the A position, and then watch the second movie with it in the B position. It's just completely wrong to wonder whether or not you'd later be able to *report* (even to yourself) whether the first movie was somehow more vivid. To lose, even partially, that kind of subjective experience is an incoherent fear. Instead, the right question to ask is "Would I have *less* experience, even though being completely unable to report---even to myself---that this was the case?". This is a coherent fear: one does not wish to be zombified, not even a little, unless there were no medical alternative to curing some malady. And---so this coherent (but I think quite wrong) view goes---the ultimate end of replacing all of your brain by electronic circuitry would be the complete loss of there being a subject (you) at all! Which is entirely equivalent to death. In the language of some, here, no more "qualia", and no more experience. We come right back to the fundamental question: does the functional equivalent supply the subjectivity, i.e., supply the "qualia" of existence? To me it seems completely bizarre and extremely unlikely that somehow nature would have chosen to bestow a "beingness" or consciousness on one peculiar way for mammals to be successful: our way, with gooey neurons and neurotransmitters.
And that had electronic or other means of accomplishing the same ends for exactly the same kinds of creature with high fitness been supplied by nature instead, then magically no consciousness, no qualia, and no subject. It sounds absurd even to write out such a claim. Lee From gts_2000 at yahoo.com Mon Jan 18 01:07:24 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 17 Jan 2010 17:07:24 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <908256.37431.qm@web36501.mail.mud.yahoo.com> --- On Sat, 1/16/10, Stathis Papaioannou wrote: >> You keep assigning absolute atomic status to neurons >> and their behaviors, forgetting that just as the brain is >> made of neurons, neurons are made of objects too. Those >> intra-neuronal objects have as much right to claim atomic >> status as does the neuron, and larger inter-neuronal >> structures can also make that claim. > > Everything I've said applies equally well if you consider > simulating the behaviour of subneuronal or multineuronal structures. > Neurons are just a convenient unit to work with. If you really thought so then you would consider the brain as the atomic unit. This seems to me the only sensible approach given our limited knowledge of actual neuroscience. But it looks as if you prefer to draw conclusions from extremely speculative predictions about the experiences and behaviors of partial brain-replacement Frankenstein monsters. It just misses the point. Either the brain is a computer or it's not, and we can know the answer without torturing anyone in the hospital with crazy experiments. You don't yet see this, and I accept the blame for wasting so much time on the fun. PS. Not enough time for me to answer messages the way I would like. Frustrating because I do enjoy the discussion. Just a busy thread going on here now along with a busier life. -gts From stefano.vaj at gmail.com Mon Jan 18 10:56:30 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 18 Jan 2010 11:56:30 +0100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <4B53990B.8010202@rawbw.com> References: <4B53990B.8010202@rawbw.com> Message-ID: <580930c21001180256y14fb688fn20b82111216b697d@mail.gmail.com> 2010/1/18 Lee Corbin : > The central question, of course, is whether one would > *survive* the uploading process. I am under the impression that this betrays a more general fear which probably originates from the inapplicability of evolution-encoded reactions to novel scenarios. For instance, does one survive entire material destruction? The evolution-encoded answer to that, by extension from massive bodily harm leading to death, is "obviously not", irrespective of various consolatory religious theories to the contrary. But what about teleportation? The truth is that the death of the original individual and the birth of a copy, or the continued existence of the former, are both plausible ways of describing a hypothetical event the nature of which does not change in the least depending on our view thereof. Another classical Gedankenexperiment: what about an operation where my neurons are replaced one by one by other, functionally equivalent... carbon-based neurons, until none remains? Do I die? And when? This is why I think that the curious idea that the interesting thing in organic brains would not be the kind of information processing they perform and their performance at such tasks, but some other, undefined and elusive, quality, is a matter of fear which cannot be overcome with rational argument.
-- Stefano Vaj From stathisp at gmail.com Mon Jan 18 11:11:25 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jan 2010 22:11:25 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <4B53990B.8010202@rawbw.com> References: <4B53990B.8010202@rawbw.com> Message-ID: 2010/1/18 Lee Corbin : > The central question, of course, is whether one would > *survive* the uploading process. > > (This emphasis on the concept of survivability I first > saw in Parsons (1984), "Reasons and Persons", which was > the very first up-to-speed account of identity in the > English language, so far as I know.) I assume you mean Parfit? > We have discussed here what possible danger would > attend the replacement of neurons by artificial ones, > or to go further, say the replacement of an entire > neural tract connecting, for example, the amygdala and > the hippocampus. Whereas hitherto masses of neurons > fired, we now have a relatively simple electric > circuit (though it still has, of course, to implement > the tens of millions of functional inputs and outputs > realized by the former mass of neurons in the tract). I'm not sure what a lesion in the place you describe would do. To make the example easier, can we talk about the visual cortex? > The wrong question, which has been repeatedly voiced, > is "would I notice?". This is completely wrong because > *noticing* is an active neural behavior which itself > is composed of the firings of millions of neurons. > Of course no "noticing" of that kind would occur, > because under the hypothesis, the entire effective > circuit between the hippocampus and the amygdala > has been functionally and accurately replaced by an > electric one. You wouldn't notice if the replacement, however it was implemented, functioned in exactly the same way as the original, including producing the same consciousness. I think even someone who believed in a soul would have to agree with that statement, although they might say that only God could make such a functional replacement. The problem is to postulate that it is possible to make a replacement that is functionally identical except for the consciousness component, and see where this leads. I think it leads to absurdity, and my conclusion is that it is therefore *not* possible to make functionally identical brain components (and by extension, brains) without consciousness. > Suppose you had a switch and a couple of movies > to watch. When the switch is in position A your > original neural tract operates, and when it's in > B, the electric circuit acts instead. During the > first movie you watch, you keep the switch in > the A position, and then watch the second movie > with it in the B position. It's just completely > wrong to wonder whether or not you'd later be able > to *report* (even to yourself) whether the first > movie was somehow more vivid. > > To lose, even partially, that kind of subjective > experience is an incoherent fear. Yes. > Instead, the right question to ask is "Would I have > *less* experience, even though being completely unable > to report---even to myself---that this was the case?". > This is a coherent fear: one does not wish to be > zombified, not even a little, unless there were > no medical alternative to curing some malady. I don't see how this question is any different. If my entire visual cortex were removed then I would have *no* visual experience, and I would certainly notice, as would anyone who asked me about the movie.
If my entire visual cortex were zombified then again I would have *no* visual experience, but I would report seeing normally, I would have the same emotional reactions to the movie, I would be able to describe it appropriately, and I would honestly believe that nothing had changed. In both cases I am completely blind, but in the latter case I don't notice it, and neither does anyone else. Suppose you still think that it is coherent to speak of a distinction between real vision and zombie vision. How do you know which sort of vision you have right now? How do you know if the human visual cortex, due to its complexity, evolved with only zombie vision? (Of course, this fact would have to be revealed rather than discovered, since no scientific test or subjective report would count for or against it). And if you did have this defect and were offered an operation to correct it, knowing that everyone else who has had this operation behaves just the same as before and says that everything looks just the same as before, would you have it? > And---so this coherent (but I think quite wrong) view > goes---the ultimate end of replacing all of your brain > by electronic circuitry would be the complete loss > of there being a subject (you) at all! Which is entirely > equivalent to death. In the language of some, here, > no more "qualia", and no more experience. Your qualia would be replaced with zombie qualia, which are indistinguishable from and just as good as normal qualia. Note that this is different to having *no* qualia. A human with a visual cortex lesion still responds to visual stimuli as evidenced by the pupillary reflex, and in cases of so-called blindsight can correctly describe objects shown to him while claiming that he sees nothing. The opposite can happen in Anton's syndrome: blind patients stumble around walking into things while maintaining the delusional belief that they can see normally. Also, a true philosophical zombie has no qualia at all, and no understanding that it has no qualia (because it has no understanding of anything). If this zombie's visual cortex were put in your head you would immediately say that you had gone blind, and you would indeed have gone blind, because it could not be functioning like normal brain tissue, sending normal outputs to the rest of your brain. If it had been functioning this way, then it would not be from an unconscious zombie, but from a normally conscious being or a being with zombie consciousness indistinguishable from normal consciousness. > We come right back to the fundamental question: does > the functional equivalent supply the subjectivity, > i.e., supply the "qualia" of existence? > > To me it seems completely bizarre and extremely > unlikely that somehow nature would have chosen to > bestow a "beingness" or consciousness on one > peculiar way for mammals to be successful: > our way, with gooey neurons and neurotransmitters. > And that had electronic or other means of > accomplishing the same ends for exactly the > same kinds of creature with high fitness been > supplied by nature instead, then magically no > consciousness, no qualia, and no subject. > > It sounds absurd even to write out such a claim. It's not absurd, just very unlikely to be the case. But zombie consciousness is absurd. -- Stathis Papaioannou From stefano.vaj at gmail.com Mon Jan 18 11:58:22 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 18 Jan 2010 12:58:22 +0100 Subject: [ExI] Coherent vs.
Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> Message-ID: <580930c21001180358r21050d36n57fdbd335267e367@mail.gmail.com> 2010/1/18 Stathis Papaioannou : > But zombie > consciousness is absurd. Just paradoxical. A real, perfect zombie thinks he is conscious, and saying that he is "mistaken" involves some essentialist concept of consciousness to which nothing "phenomenally identifiable" corresponds. -- Stefano Vaj From stathisp at gmail.com Mon Jan 18 12:01:11 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jan 2010 23:01:11 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <908256.37431.qm@web36501.mail.mud.yahoo.com> References: <908256.37431.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/18 Gordon Swobe : > --- On Sat, 1/16/10, Stathis Papaioannou wrote: > >>> You keep assigning absolute atomic status to neurons >>> and their behaviors, forgetting that just as the brain is >>> made of neurons, neurons are made of objects too. Those >>> intra-neuronal objects have as much right to claim atomic >>> status as does the neuron, and larger inter-neuronal >>> structures can also make that claim. >> >> Everything I've said applies equally well if you consider >> simulating the behaviour of subneuronal or multineuronal structures. >> Neurons are just a convenient unit to work with. > > If you really thought so then you would consider the brain as the atomic unit. This seems to me the only sensible approach given our limited knowledge of actual neuroscience. But it looks as if you prefer to draw conclusions from extremely speculative predictions about the experiences and behaviors of partial brain-replacement Frankenstein monsters. It just misses the point. If it is possible to replace part of the brain leaving behaviour unchanged then the obvious next step is to replace the whole brain, and the patient with the whole brain replacement would not be a zombie either, since it is absurd to think that you would be 100% conscious with 99% of your brain replaced and 0% conscious after the last 1% is replaced. The proposed experiments might be speculative insofar as they cannot be carried out today, but I think they are perfectly understandable, and they break no logical or physical law. > Either the brain is a computer or it's not, and we can know the answer without torturing anyone in the hospital with crazy experiments. You don't yet see this, and I accept the blame for wasting so much time on the fun. The brain may not be a computer but its function, including consciousness, may be able to be replicated by another machine, including a computer, just as machines can replicate the function of all sorts of other things found in nature. You don't need to do the experiments to draw conclusions from them, just as you don't need to build a Chinese Room to draw conclusions from that. Unlike with the CR, we will probably be in a position one day to replace damaged neural tissue with electronic prostheses. So it's important to know what you would actually make of this, given that the patients will come out of the surgery saying they feel well. You either have to say that they are partial zombies who only think that they feel well (as Lee Corbin thinks is possible) or, if you think this is incoherent, that the electronic prostheses must have consciousness in them as well, whether by virtue of their programming, their matter, or because it must be instilled in them by God to keep the universe consistent.
-- Stathis Papaioannou From gts_2000 at yahoo.com Mon Jan 18 12:04:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 18 Jan 2010 04:04:02 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies Message-ID: <64697.50664.qm@web36501.mail.mud.yahoo.com> --- On Sun, 1/17/10, Ben Zaiboc wrote: > Gordon does not seem to appreciate the difference between a > description and a simulation It seems some people here confuse or simply never understood the important difference between digital simulations and digital copies. Digital simulations = digital descriptions of non-digital objects and processes running on digital computers. Digital copies = digital duplications of digital objects. digital copy = digital duplication digital simulation = digital description Digital copies equal the original objects. For example one might load Word onto digital computer A and load another copy of Word onto digital computer B. Those two applications running on A and B now exist as copies/duplicates of the original. What happens if the original object does not exist as a digital object? I.e., what if the original object does not exist as a digital computer or as a program like Word that runs on one? In that case we can do no more than create a digital simulation of the non-digital object. And digital simulations of non-digital objects do NOT equal the original objects. They merely describe them. For example you might create a digital simulation of an apple on your digital computer. Your digital simulation of an apple will appear very much like a real apple, but you will find it difficult to eat. The reason you cannot eat that apple should be pretty obvious: it's not really an apple. It's merely a digital simulation of a non-digital object. While such digital simulations of non-digital objects are very possible and very common, digital copies of non-digital objects and processes are logically, philosophically and technologically impossible. They do not exist. So, on digital computers we can... 1) create digital copies of digital objects, copies which retain all the properties of the originals. and we can also... 2) create digital simulations of non-digital objects, simulations which lose the real properties of the originals. PS. Like most everything in the natural world including apples, the human brain appears to be a non-digital object. It's just one very smart apple. -gts From stathisp at gmail.com Mon Jan 18 12:17:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jan 2010 23:17:51 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <580930c21001180358r21050d36n57fdbd335267e367@mail.gmail.com> References: <4B53990B.8010202@rawbw.com> <580930c21001180358r21050d36n57fdbd335267e367@mail.gmail.com> Message-ID: 2010/1/18 Stefano Vaj : > 2010/1/18 Stathis Papaioannou : >> But zombie >> consciousness is absurd. > > Just paradoxical. A real, perfect zombie thinks to be conscious and > saying that he is "mistaken" involves some essentialist concept of > consciousness to which nothing "phenomenically identifiable" > corresponds. A regular zombie doesn't know whether he is a zombie or not, since he has no mind and no knowledge of anything, but a human does know he is not a zombie. That's the orthodox view, though Dennett argues with good reason that even the regular zombie is an absurdity. But imagine if you could be a kind of zombie now and not know it; you honestly believe you are conscious, but you might in fact be blind, deaf and aphasic. 
The idea that at least the conscious know they are conscious is shown to be false, since they may actually have zombie consciousness. Thus we have zombies who behave as if they are conscious *and* honestly believe that they are conscious: conscious zombies, but with an inferior zombie consciousness.

--
Stathis Papaioannou

From stathisp at gmail.com  Mon Jan 18 12:21:12 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 18 Jan 2010 23:21:12 +1100
Subject: [ExI] digital simulations, descriptions and copies
In-Reply-To: <64697.50664.qm@web36501.mail.mud.yahoo.com>
References: <64697.50664.qm@web36501.mail.mud.yahoo.com>
Message-ID:

2010/1/18 Gordon Swobe :
> --- On Sun, 1/17/10, Ben Zaiboc wrote:
>
>> Gordon does not seem to appreciate the difference between a
>> description and a simulation
>
> It seems some people here confuse or simply never understood the important difference between digital simulations and digital copies.
>
> Digital simulations = digital descriptions of non-digital objects and processes running on digital computers.
>
> Digital copies = digital duplications of digital objects.
>
> digital copy = digital duplication
> digital simulation = digital description
>
> Digital copies equal the original objects. For example one might load Word onto digital computer A and load another copy of Word onto digital computer B. Those two applications running on A and B now exist as copies/duplicates of the original.
>
> What happens if the original object does not exist as a digital object? I.e., what if the original object does not exist as a digital computer or as a program like Word that runs on one? In that case we can do no more than create a digital simulation of the non-digital object. And digital simulations of non-digital objects do NOT equal the original objects. They merely describe them.
>
> For example you might create a digital simulation of an apple on your digital computer. Your digital simulation of an apple will appear very much like a real apple, but you will find it difficult to eat. The reason you cannot eat that apple should be pretty obvious: it's not really an apple. It's merely a digital simulation of a non-digital object.

But a digital simulation of a clock will still tell the time. You have to show that consciousness is more like an apple than like the telling of time.
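To see the point in the crudest possible terms, here is a minimal sketch (Python; the function name and output format are illustrative assumptions, not anything from the thread). A program that merely describes a clock nonetheless performs the clock's defining function:

    import time

    # A "simulated" clock: no gears, no quartz, only a description of
    # what a clock does. It still tells the time.
    def simulated_clock():
        return time.strftime("%H:%M:%S")

    print(simulated_clock())  # e.g. 23:07:41

A simulated apple, by contrast, yields only data about an apple.

--
Stathis Papaioannou

From jonkc at bellsouth.net  Mon Jan 18 14:59:25 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 18 Jan 2010 09:59:25 -0500
Subject: [ExI] Meaningless Symbols.
In-Reply-To: <4B522E1B.10207@satx.rr.com>
References: <701430.26678.qm@web36508.mail.mud.yahoo.com> <4B522E1B.10207@satx.rr.com>
Message-ID: <5D5B32FD-FA00-4031-824B-F136EA351231@bellsouth.net>

On Jan 16, 2010, Damien Broderick wrote:

> John Clark has just stated unequivocally that *he* at least thinks so.

Yep, the fingers *understood* the calculation. You picked that example not me, and you must have thought that the finger had an atom of the supremely important property intelligence or you would not have used it as a specimen, so I don't see what's so shocking in me claiming that it also had an atom of a trivial and completely useless property called "understanding". Trivial and completely useless according to you and Gordon at least. I just don't understand your position, not that it matters in the slightest of course.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...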
URL: From jonkc at bellsouth.net Mon Jan 18 15:55:55 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 18 Jan 2010 10:55:55 -0500 Subject: [ExI] digital simulations, descriptions and copies. In-Reply-To: <64697.50664.qm@web36501.mail.mud.yahoo.com> References: <64697.50664.qm@web36501.mail.mud.yahoo.com> Message-ID: <03030A4F-AD10-49F6-9CB2-946F7518EC69@bellsouth.net> On Jan 18, 2010, at 7:04 AM, Gordon Swobe wrote: > > Digital copies equal the original objects. Yes, except that adjectives are not objects and neither am I. > the human brain appears to be a non-digital object Yes, but the human brain is not important, the human mind is. The mind is what the brain does and it's true there probably are some analog processes going on in that grey goo as well as digital ones, but even the best analog computers are terrible at precision because noise soon overwhelms them even when cooled to cryogenic temperatures; Mr. Heisenberg and his pesky principle also create insuperable difficulties. The cheapest and simplest digital computer is far more precise than the best analog computer. > digital simulations of non-digital objects do NOT equal the original objects. True. > They merely describe them. That's OK, that's all I need. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Mon Jan 18 17:05:55 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 18 Jan 2010 09:05:55 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <131730.10260.qm@web113619.mail.gq1.yahoo.com> Gordon Swobe pronounced: > On Sat, 1/16/10, Ben Zaiboc > You're saying that only digital computers can be simulated by >> digital computers.? > Yes. >> It's trivially obvious that this can't be >> true.? > Yes we certainly do simulate many non-digital processes Hilarious. OK, when someone answers "Yes" to the question "are you saying that X is true", AND to the observation that X can't be true, it's time to bring the discussion to an end. Gordon, you've been winding us all up haven't you? DOH! Congratulations on being able to keep it going for so long. This must be a bit like what those academics must have felt when they found they'd been hoodwinked with Andrew Bulhak's Postmodernism generator. Oh, well, at least it's led some of us to think more deeply about just why philosophical zombies are impossible, and is a good cautionary tale about why you shouldn't mix philosophy and neuroscience. I suppose we should thank you. Can we talk about something sensible now? Ben Zaiboc From stefano.vaj at gmail.com Mon Jan 18 18:54:30 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 18 Jan 2010 19:54:30 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <802682.54887.qm@web36502.mail.mud.yahoo.com> References: <969914.98305.qm@web113608.mail.gq1.yahoo.com> <802682.54887.qm@web36502.mail.mud.yahoo.com> Message-ID: <580930c21001181054s4ef87a42lfcadfd09dbbe8c82@mail.gmail.com> 2010/1/17 Gordon Swobe > --- On Sat, 1/16/10, Ben Zaiboc wrote: > > > You're saying that only digital computers can be simulated by digital > > computers. > > Yes. > Come on, aren't we too quick in ignoring the difference that such a simulation involves? The fact that a PC is simulated on a Mac does not imply that the Mac really understands what being a PC may *mean* for the latter... While a real and a simulated PC may be behaviourally similar, everybody can see that the PC-on-a-Mac is nothing but a PC-zombie. 
;-)

--
Stefano Vaj
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefano.vaj at gmail.com  Mon Jan 18 21:40:40 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 18 Jan 2010 22:40:40 +0100
Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded
In-Reply-To:
References: <4B53990B.8010202@rawbw.com> <580930c21001180358r21050d36n57fdbd335267e367@mail.gmail.com>
Message-ID: <580930c21001181340l5571a39em2ed9a353a266ed06@mail.gmail.com>

2010/1/18 Stathis Papaioannou
> A regular zombie doesn't know whether he is a zombie or not, since he
> has no mind and no knowledge of anything

In that case he is not a "regular" zombie, because he differs from a human being at least in one kind or another of tangible, phenomenal (internal?) showing of consciousness that can be imagined.

> but a human does know he is
> not a zombie.

I contend that a human does not know anything like that, because real zombies would be indistinguishable from "real" human beings, including for the zombies themselves.

> The idea that at least the conscious know they are conscious
> is shown to be false, since they may actually have zombie
> consciousness.

This is indeed my point.

> Thus we have zombies who behave as if they are
> conscious *and* honestly believe that they are conscious: conscious
> zombies, but with an inferior zombie consciousness.

Only that there is no way to distinguish such "false consciousness" from the real thing.

--
Stefano Vaj
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jrd1415 at gmail.com  Mon Jan 18 21:46:48 2010
From: jrd1415 at gmail.com (Jeff Davis)
Date: Mon, 18 Jan 2010 14:46:48 -0700
Subject: [ExI] have you ever seen anything like this?
In-Reply-To: <66893462A8AB44BA8E07C37D33BAC07D@spike>
References: <66893462A8AB44BA8E07C37D33BAC07D@spike>
Message-ID:

The only thing I can think of as an explanation for the bird's behavior, Spike, is that he/she was engaged in a futile attempt to protect his/her nest/young.

Best, Jeff Davis

"Everything's hard till you know how to do it."
Ray Charles

2010/1/15 spike :
> I have heard of humans who get caught up in the emotion of the battle,
> suicidal rage etc, but I don't think I have ever seen it in any other
> beast. Here a woodpecker keeps coming back to fight, clearly not for self
> defense or with any hope of actually devouring the serpent, but rather just
> to injure or slay it:
>
> http://www.youtube.com/watch?v=14yxYTOdL38
>
> spike
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From stathisp at gmail.com  Mon Jan 18 22:05:10 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 19 Jan 2010 09:05:10 +1100
Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded
In-Reply-To: <580930c21001181340l5571a39em2ed9a353a266ed06@mail.gmail.com>
References: <4B53990B.8010202@rawbw.com> <580930c21001180358r21050d36n57fdbd335267e367@mail.gmail.com> <580930c21001181340l5571a39em2ed9a353a266ed06@mail.gmail.com>
Message-ID:

2010/1/19 Stefano Vaj :
> 2010/1/18 Stathis Papaioannou
>>
>> A regular zombie doesn't know whether he is a zombie or not, since he
>> has no mind and no knowledge of anything
>
> In that case he is not a "regular" zombie, because he differs from a human
> being at least in one kind or another of tangible, phenomenal (internal?)
> showing of consciousness that can be imagined.
The standard definition of a philosophical zombie is that it does not have any consciousness, only intelligent behaviour. This may be impossible if consciousness is a necessary accompaniment of intelligent behaviour, as my arm rising into the air is a necessary accompaniment of my deltoid muscle contracting and causing abduction of my humerus. >> Thus we have zombies who behave as if they are >> conscious *and* honestly believe that they are conscious: conscious >> zombies, but with an inferior zombie consciousness. > > Only that there is no way to distinguish such "false consciousness" from the > real thing. Yes, that was my point. What I'm calling a conscious zombie may be similar to Dennett's zimboe. Dennett thinks zombies are a disgrace to philosophy. -- Stathis Papaioannou From scerir at libero.it Mon Jan 18 22:14:57 2010 From: scerir at libero.it (scerir) Date: Mon, 18 Jan 2010 23:14:57 +0100 (CET) Subject: [ExI] quantum brains Message-ID: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost> And of course since I'm persuaded that some psi phenomena are real, *something* weird as shit is needed to account for them, something that can either do stupendous simulations in multiple worlds/superposed states, or can modify its state according to outcomes in the future. If that's not QM, it's something equally hair-raising that electronic computers aren't built to do. Damien Broderick -> http://physicsandcake.wordpress.com/2010/01/04/quantum-brains/ From lacertilian at gmail.com Mon Jan 18 21:16:43 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 18 Jan 2010 13:16:43 -0800 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: References: <64697.50664.qm@web36501.mail.mud.yahoo.com> Message-ID: It doesn't simplify things for me to think about whether or not a simulated apple is equivalent to a real apple. That's some kind of facsimile of a reductionist, absurdist argument that I can't quite put my finger on. It does nothing but confuse. I don't care whether or not I can eat a digital apple from where I stand now. Now, on the other hand, if I myself were a digital simulation... But, that is a small quibble. More important is the fact I can't find anything in Gordon's message that I can use to determine whether or not I understand the differences between simulations, descriptions and copies in the same way that he does. I'm not even sure why the question is important. Do I need to understand? I can't even conjure up an elucidating thought experiment if I don't know what the problem is relevant to in practical terms. So: I hereby request a dilemma that can be solved through shrewd discrimination between digital descriptions and digital duplications. If it cannot be solved any other way, all the better. If the answer is obvious, don't bother; an easy dilemma is no dilemma at all. From lacertilian at gmail.com Mon Jan 18 21:54:02 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 18 Jan 2010 13:54:02 -0800 Subject: [ExI] The Throughput of English Message-ID: I've known for a while that English is a really very bad language. It is mind-bogglingly riddled with double-standards (I before E, except after C, or in "sleigh", or...) and ambiguities ("one teaspoon". An implement? A volume?). It is straight-up pathologically counter-intuitive. I'm going to take this as a given for now. If anyone requires convincing, let me know. For almost as long, I've been very interested in constructing a novel language. 
I consider my demands very modest and practical: I want to be able to communicate concepts to other human beings, and, ideally, I'd like them to be the same concepts as I have in mind. Doing this in English, I've found, becomes exponentially more difficult as the concepts increase in complexity (C) and as the experiential rift (R) between speaker and listener widens. Set absolute simplicity and perfectly identical memories at 1, limitless complexity and perfectly alien mindsets at infinity:

English Efficiency = 1/(C^2*R^2)

We're just about at escape velocity already, and I even supplemented my point with algebra to dampen the effect! Algebra is substantially better than English, but a little too narrow to be suitable for everyday use. What I really want is a language that works like this:

Language Efficiency = 1/C^(1/2)

But I am a realist, and I would settle for one that works like this:

Language Efficiency = 1/(C*R^(1/2))

It should be noted that these formulae are quite crude and incomplete. Certainly abstract skill in speaking plays a part, as well as a myriad other factors. Fortunately my purposes do not require a rigorous proof; I need only give a vague impression of a fundamentally better language.
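As a toy numeric reading of the three formulae (Python; the function names, the sample values, and the bracketing of the last formula as 1/(C*R^(1/2)) are assumptions made here, not anything stated in the message):

    # Toy comparison of the three efficiency formulae above.
    # C = concept complexity, R = experiential rift; both start at 1.
    def english_eff(C, R):
        return 1.0 / (C**2 * R**2)

    def ideal_eff(C, R):
        return 1.0 / C**0.5       # R drops out entirely

    def realist_eff(C, R):
        return 1.0 / (C * R**0.5)

    for C, R in [(1, 1), (4, 2), (16, 4)]:
        print(C, R, english_eff(C, R), ideal_eff(C, R), realist_eff(C, R))

    # English efficiency collapses fastest as C and R grow; the "ideal"
    # language degrades only with the square root of complexity.

At this point, the question becomes: what would that language be like? I'm fairly convinced that it doesn't exist yet. R is conspicuously attenuated, whereas it is demonstrably endemic in every natural language I know anything about. I attribute the importance of R to the enormous descriptive power of analogies. Note, for example, the previous use of "escape velocity" to call to mind an object (the author) completely leaving the gravitational field (understanding) of a more massive object (the reader). That's right. I'm calling you fat. (Don't worry, it's a compliment in this case.)

Analogies cannot be done away with entirely, and shouldn't be. I would argue that the only reason we're able to learn anything at all, as opposed to being trained or conditioned, is because our brains are hardwired to form analogies. So, a more efficient language would not have fewer analogies; it would have MORE analogies, and it would have them embedded directly in its dictionary. This is why R seems to plateau: because the common ground necessary for mutual understanding is built-in, you are not at a significant disadvantage if you haven't read Shakespeare and everyone else has. Or, in slightly more modern terms, if you've never seen Lost.

So that formula does dictate certain qualities of the imaginary language it refers to. What about C? Why is it linear? Or, if you prefer, why is it exponential in English? Consider this: any quantity of coherent information, no matter how long or disconnected, conveys a single concept. The previous sentence conveyed one concept. This message, as a whole, conveys one concept. Even freaky optical illusions http://en.wikipedia.org/wiki/File:Two_silhouette_profile_or_a_white_vase.jpg convey just one solitary concept. You can put two concepts together to form a new concept, perhaps greater than the sum of its parts, and you can break most concepts into smaller pieces. What C represents is the complexity of a discrete concept, and when it gets high enough we English-speaking cretins tend to think more than one thing is being said. Read a book! How many concepts does it contain? You can put down almost any number, and there would be no way to prove you wrong. Like most imaginary things, that is just how concepts work.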
English does not respect this phenomenon; we are taught to divide our concepts up into "sentences", and our sentences into "words", and our words into "syllables", and, worst of all, we are expected to accept at face value the ludicrous notion that a syllable can be broken into individual "letters". (TIRADE BEGINNING HERE. READING OPTIONAL.) This is like dividing the universe into solids and fluids. It is bad and wrong. There is no longer any place for quark-gluon plasmas, Bose-Einstein condensates, neutron stars, or black holes. Or, for that matter, gelatin. Or people. The very underlying structure of English has us looking for infinitely sharp lines drawn between everything and everything else. We get extremely upset if our attempts at categorization are frustrated. See: wave-particle duality, phylogenetics, taxonomy of all sorts, gelatin, other infuriatingly non-Newtonian fluids. I am a big fan of categorization myself, but I've been burned enough to know that the universe makes a whole lot more sense when your sorting algorithm is hierarchical. We should stop putting things in boxes and start putting things in Venn diagrams. Start with a big obnoxious field of "THING", in which is contained all things. There are no things outside of THING. Not even "nothing". When we put in an area labeled "hot" and an area labeled "cold", we may be shocked to discover that there are things which are neither hot nor cold and things which are both hot AND cold. Note that nothing has been said as to whether or not such things are "real". The simple fact that we can talk about them makes them into things, and, as you well know, all things are within THING. It should become transparently obvious at this point that our intuitive notion of "opposites" is hideously flawed. Hot is simply not the opposite of cold; it is the opposite of "not-hot". I want to stress that this is not the kind of delusion which stems from mere stupidity. It is built directly into the core of your vocabulary, whether or not you're aware of it, running wild through your neocortex and slicing up everything coming out of your mouth into absurd gibberish lacking any nuance. And it's doing exactly the same thing to me. (TIRADE ENDING HERE. RECOGNITION OF IRONY OPTIONAL.) What all of this is leading up to is the fact that when we attempt to convey very complex concepts in English -- such as, for example, everything between the beginning of this sentence and the point at which I wrote "letters" -- we incur a rather steep concept-tax. The human brain (not mind, but brain) can only retain about seven items in short-term memory. See: Miller's Law, Chunking. Since I'm writing in English, I have already used many, many more than seven concepts, and -- here is where the trouble begins -- virtually none of them can be aggregated into larger coherent chunks. The last sentence alone, even for a fluent speaker such as myself, is pushing the seven-chunk limit. By now it's impossible to hold the entire paragraph in mind at once. I myself have to go by vague mnemonic approximations, resulting in nearly undetectable conceptual drift as I slowly lose track of what in the world I was talking about. What AM I talking about? Oh, right. C. It stands to reason that complex information can only be digested efficiently through use of efficient chunking, which English discourages. Strongly. Homonyms alone presumably wreak havoc. So, if one wants an efficient language, one should try to give as many opportunities for chunking as possible. 
I've thought of a few options. It's totally insensible to have an alphabet instead of a syllabary, for one thing. I think Japanese has it right, as far as the writing system goes, except for the peculiar division between hiragana and katakana and the fact that it ultimately borrows all of its glyphs from elsewhere. Logograms save a lot of physical space, and I would not be surprised to find that they save a lot of mental space as well.

The crucial turning point, I think, is the grammar. Certain concepts, many of them perfectly ordinary, fiercely resist proper English grammar. I've given this a lot of thought, and eventually determined that the only thing to do is adopt a version of (yes!) Reverse Polish notation. The reasons are exactly the same for linguistics as for mathematics. Compare and contrast:

((4+1*0)/7+6^(2+9))^(3/(5*8))
4 1 0 * + 7 / 6 2 9 + ^ + 3 5 8 * / ^

(A minimal evaluator for the second form appears at the end of this message.) Now try saying it in English. Four plus one multiplied by... ugh! Already ambiguous! English does not come with an order of operations. Just add that to the list. I might be able to make do with commas, but in person the pauses would be irritating and laborious.

Consider the numbers as nouns and the operators as verbs, and you see my point. But that isn't the only thing that happens. There are a few very interesting side effects; among many similar opportunities, the possibility is now open for an algebra of meaning. But this message is already insufferably long, owing to a terrible conversion rate between words and thoughts, so I'll leave it here for now. I could go on at some length, but I believe the thought is complete enough for someone else to pick up where I left off.

(DRAMATIC TENSION BEGINNING HERE.)
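The stack evaluation described above can be sketched in a few lines (Python; the function and variable names are illustrative assumptions, and only the expression itself comes from the message):

    # Minimal stack evaluator for Reverse Polish notation.
    def eval_rpn(expr):
        ops = {'+': lambda a, b: a + b,
               '-': lambda a, b: a - b,
               '*': lambda a, b: a * b,
               '/': lambda a, b: a / b,
               '^': lambda a, b: a ** b}
        stack = []
        for tok in expr.split():
            if tok in ops:
                b = stack.pop()  # the second operand sits on top
                a = stack.pop()
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))
        if len(stack) != 1:
            raise ValueError("malformed RPN expression")
        return stack[0]

    # Agrees with the infix form ((4+1*0)/7+6^(2+9))^(3/(5*8)):
    print(eval_rpn("4 1 0 * + 7 / 6 2 9 + ^ + 3 5 8 * / ^"))

From jonkc at bellsouth.net  Mon Jan 18 22:28:48 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 18 Jan 2010 17:28:48 -0500
Subject: [ExI] Hi Natasha (was:over posting)
In-Reply-To:
References: <331623.31673.qm@web36501.mail.mud.yahoo.com><08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net>
Message-ID: <94293831-FD4C-4423-AE5D-1BA9A715F3D3@bellsouth.net>

On Jan 14, 2010, Natasha Vita-More wrote:

> John, please be careful about over posting.

Hi Natasha: I'd write to you privately but just to show how dumb I am I could not find your email address so must do so publicly. First of all let me make it very clear that I am not complaining, nobody believes in private property more than me and it's your list not mine so if you want me to post less often then I will do so. I would have put a throttle on my comments a few days ago but I only just now discovered your request as it was sent to the general list and not to me. But I am a little curious why you singled me out but not Gordon Swobe who has recently posted more than me and his posts have far far far less substance than mine. Ok I admit that last part was subjective and your mileage may vary, but I happen to think subjectivity is the most important thing in the observable universe.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...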
URL:

From pharos at gmail.com  Mon Jan 18 23:06:38 2010
From: pharos at gmail.com (BillK)
Date: Mon, 18 Jan 2010 23:06:38 +0000
Subject: [ExI] Hi Natasha (was:over posting)
In-Reply-To: <94293831-FD4C-4423-AE5D-1BA9A715F3D3@bellsouth.net>
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <94293831-FD4C-4423-AE5D-1BA9A715F3D3@bellsouth.net>
Message-ID:

On 1/18/10, John Clark wrote:
> I'd write to you privately but just to show how dumb I am I could not find
> your email address so must do so publicly.

But..... ??? every message to the list, including Natasha's, has the email address of the writer at the head of the message. ??????

BillK

From thespike at satx.rr.com  Mon Jan 18 23:11:37 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 18 Jan 2010 17:11:37 -0600
Subject: [ExI] quantum brains
In-Reply-To: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost>
References: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost>
Message-ID: <4B54EAA9.1050102@satx.rr.com>

On 1/18/2010 4:14 PM, Serafino cited:

> -> http://physicsandcake.wordpress.com/2010/01/04/quantum-brains/

which links to which has these nice Fermi comments:

<# Dave Bacon Says: September 4, 2008 at 4:41 pm | Reply

A variant of this is to just append classical computational power of modern computers to our own brain. If you had the laptop I'm typing this on directly wired to your brain such that it could access some of your current thinking processes, then you'd be able to do some pretty amazing things. I mean if I could quickly do the number crunching I periodically write programs to carry out, well then, I'd be a fundamentally different intelligence wouldn't I? Of course I wouldn't be able to factor exceptionally fast, but a lot faster than I can factor right now :)

Maybe the reason we don't see intelligences out there is that they have all migrated to quantum brains and so they avoid any contact with classical beings who will destroy their coherence?

# Geordie Says: September 4, 2008 at 4:51 pm | Reply

Hi Dave! Yes you're right, although it is possible that the number-crunching powers of our laptops could evolve in biological brains if there were sufficient selection pressure on these capabilities. I'm not sure if you were joking about that last bit, but it seems likely that "migrating to quantum brains" would include decoupling from environments... maybe the end-point of the evolution of intelligence is floating in perfectly isolated spheres in deep outer space?

You know, you could take this idea a step further. Let's say there are a set of increasingly accurate but also increasingly difficult to "see" physical theories T_1, T_2, ... where T_1 is classical physics, T_2 is quantum mechanics, T_3 is quantum gravity, T_4 is some crazy membrane whatnot, etc... imagine you can get increasing computational capability at every level of this hierarchy. Since each is increasingly "difficult to see" for its precursors, in order to harness the capabilities at level T_j you probably have to do a lot to make sure those "hard to see" effects can be used, which probably isolates you from the precursor levels... in this picture the quantum brains would isolate themselves from the classical brains, and the quantum gravity brains (needing of course to be near black holes) would be hidden to the two precursor levels, etc. etc.
etc.> From lcorbin at rawbw.com Mon Jan 18 23:29:19 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 18 Jan 2010 15:29:19 -0800 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <580930c21001180256y14fb688fn20b82111216b697d@mail.gmail.com> References: <4B53990B.8010202@rawbw.com> <580930c21001180256y14fb688fn20b82111216b697d@mail.gmail.com> Message-ID: <4B54EECF.3050403@rawbw.com> Stefano writes > Lee wrote > >> The central question, of course, is whether one would >> *survive* the uploading process. > > I am under the impression that this betrays a more general fear which > probably originates from the inapplicability of evolution-encoded > reactions to novel scenarios. What are some examples of evolution-encoded reactions to novel scenarios? > But what about teleport? The truth is that the death of the original > individual and the birth of a copy, or the continued existence of the > former, are both plausible ways of describing an hypothetical event > the nature of which does not change in the least depending on our view > thereof. > > Another classical Gedankenexperiment: what about an operation where my > neurons are replaced one by one by other, functionally equivalent,... > carbon-based neurons, until none remains? Do I die? And when? While I agree that people who adopt certain views about these novelties are often unshakable in their newfound beliefs, they're also being extremely reactionary in the following sense: it's perfectly clear that if one grew up teleporting here and there, and knew many people whose neurons had been successfully (i.e. functionally) replaced, then ONE WOULD HAVE NO SUCH CONCERNS. > This is why I think that the curious idea that the interesting thing > in organic brains would not be the kind of information processing they > perform and their performance in such task, but some other, undefined > and elusive, quality, is a matter of fear which cannot be overcome > with rational argument. And don't forget the "day-persons". These are people whose philosophic intuition is so backward and rudimentary that upon being exposed to materialism and the fact that they're made of atoms, and the fact that they must concede that they lose consciousness during sleep HENCEFORTH MAINTAIN THAT THEY ARE NOT THE SAME PEOPLE FROM DAY TO DAY. It just goes to show that if folks have sufficiently bad taste and poor intuition, then they can be capable of believing almost anything. Lee From stathisp at gmail.com Mon Jan 18 23:36:15 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 19 Jan 2010 10:36:15 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <4B54EECF.3050403@rawbw.com> References: <4B53990B.8010202@rawbw.com> <580930c21001180256y14fb688fn20b82111216b697d@mail.gmail.com> <4B54EECF.3050403@rawbw.com> Message-ID: 2010/1/19 Lee Corbin : > And don't forget the "day-persons". These are people whose > philosophic intuition is so backward and rudimentary that > upon being exposed to materialism and the fact that they're > made of atoms, and the fact that they must concede that > they lose consciousness during sleep HENCEFORTH MAINTAIN > THAT THEY ARE NOT THE SAME PEOPLE FROM DAY TO DAY. These folk are not necessarily so backward! They may maintain that they are not the same person from day to day but conclude as a result that being "the same person" is either not important or not what we originally thought it was. 
They might thus decide that teleportation is OK, because if you're afraid of teleportation, then you should also be afraid of going to sleep.

--
Stathis Papaioannou

From avantguardian2020 at yahoo.com  Tue Jan 19 00:43:48 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Mon, 18 Jan 2010 16:43:48 -0800 (PST)
Subject: [ExI] quantum brains
In-Reply-To: <4B54EAA9.1050102@satx.rr.com>
References: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost> <4B54EAA9.1050102@satx.rr.com>
Message-ID: <486503.84769.qm@web65603.mail.ac4.yahoo.com>

----- Original Message ----
> From: Damien Broderick
> To: scerir ; ExI chat list
> Sent: Mon, January 18, 2010 3:11:37 PM
> Subject: Re: [ExI] quantum brains

> Maybe the reason we don't see intelligences out there is that they have all
> migrated to quantum brains and so they avoid any contact with classical beings
> who will destroy their coherence?

In the literature, it seems that the main (if not only) objection to the brain (and biological systems in general) utilizing quantum mechanics is the decoherence caused by the "warm and wet" environment of living systems. But I don't think this is the huge problem everybody makes it out to be. Physicists are closing in on macroscopic entanglement even as we speak. They have made progress at maintaining entanglement between macroscopic systems and quantum systems at relatively high temperatures.

http://physicsworld.com/cws/article/news/24285

You have increasing reports of the importance of *photons* in intercellular communication. Indeed there are *photoreceptors* in the deep tissues of the brain which never see the light of day.

http://www.jneurosci.org/cgi/content/full/19/10/3681

And furthermore, one of the oldest QM thought experiments out there involves the macroscopic quantum entanglement of a rather "warm and wet" cat with the radioactive decay of atoms.

http://en.wikipedia.org/wiki/Schr%C3%B6dinger's_cat

All in all, Penrose may not be spot on with his mechanism of microtubules and quantum gravity, but the general gist of his argument is compelling. To dismiss the possibility of QM effects in living organisms due to the "warm and wet" mantra is just lazy science.

Stuart LaForge

"Never express yourself more clearly than you think." - Niels Bohr

From jonkc at bellsouth.net  Tue Jan 19 00:41:18 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 18 Jan 2010 19:41:18 -0500
Subject: [ExI] Hi Natasha (was:over posting)
In-Reply-To:
References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <94293831-FD4C-4423-AE5D-1BA9A715F3D3@bellsouth.net>
Message-ID: <56DCC47E-E08C-4A15-B444-B03880C92EE9@bellsouth.net>

On Jan 18, 2010, at 6:06 PM, BillK wrote:
>
> But..... ??? every message to the list, including Natasha's, has the
> email address of the writer at the head of the message. ??????

Good question. So why didn't Natasha write to me privately? I have been on this fucking list for 15 fucking years and if anybody fucking thinks they have expressed fucking Extropian fucking principles better than I fucking have let them fucking speak up right fucking now. And Natasha treats me like some clueless know-nothing. I am pissed and I think I have every reason in the world to be pissed.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From spike66 at att.net Tue Jan 19 01:02:21 2010 From: spike66 at att.net (spike) Date: Mon, 18 Jan 2010 17:02:21 -0800 Subject: [ExI] have you ever seen anything like this? In-Reply-To: References: <66893462A8AB44BA8E07C37D33BAC07D@spike> Message-ID: > ...On Behalf Of Jeff Davis > Subject: Re: [ExI] have you ever seen anything like this? > > http://www.youtube.com/watch?v=14yxYTOdL38 > > > The only thing I can think of as an explanation for the > bird's behavior, Spike, is that he/she was engaged in a > futile attempt to protect his/her nest/young. > > Best, Jeff Davis Ja that is what I concluded as well. I was thrown off by the fact that the plumage made me think that this was a male bird, and the males seldom will go to such lengths do stop their chicks from being devoured. I looked up the red headed woodpecker, and I realize the plumage doesn't vary that much between genders, so this one was likely the mama, which are known to perform such heroics when duty calls. I found myself cheering for the bird bigtime. spike From thespike at satx.rr.com Tue Jan 19 01:49:58 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 18 Jan 2010 19:49:58 -0600 Subject: [ExI] IPCC investgates their own glacier forecast Message-ID: <4B550FC6.5070406@satx.rr.com> Tuesday, 19 January 2010 Agence France-Presse PARIS, NEW DELHI: The U.N.'s panel of climate scientists said they will investigate claims its own doomsday prediction for the disappearance of Himalayan glaciers by 2035 is mistaken. In 2007, the Nobel Prize-winning Intergovernmental Panel on Climate Change (IPCC) warned that glaciers in the Himalayas were receding faster than in any other part of the world and could "disappear altogether by 2035 if not sooner". At the weekend, Britain's Sunday Times newspaper reported that this reference came from the green campaign group WWF, which in turn took it from a magazine interview given by an Indian glaciologist in 1999. There is no evidence that the claim was published in a peer-reviewed journal, a cornerstone of scientific credibility, it said. "We are looking into the issue of the Himalayan glaciers, and will take a position on it in the next two or three days," the IPCC's chairman, Rajendra Pachauri said. The Sunday Times reported that the IPCC was likely to retract the figure, which would be a humiliation and a further boost for climate sceptics after a scandal last month dubbed 'climategate'. Emails from scientists at Britain's University of East Anglia, a top centre for climate research, were leaked and seized on by sceptics last month as evidence that experts twisted data in order to dramatise global warming. Some of the thousands of messages expressed frustration at the scientists' inability to explain what they described as a temporary slowdown in warming. A leading glaciologist who contributed to the Fourth Assessment Report described the mistake as huge and said he had notified his colleagues of it in late 2006, months before publication. Loss of the Himalayan glaciers by 2035 would take two or three times the highest expected rate of global warming, said Georg Kaser of the Geography Institute at Austria's University of Innsbruck. "This number is not just a little bit wrong, but far out of any order of magnitude. It is as wrong as can be wrong. "To get this outcome, you would have to increase the ablation [ice loss] by 20 fold. You would have to raise temperatures by at least 12 degrees [Celsius]." "It is so wrong that it is not even worth discussing ... I pointed it out." 
Asked why his warning had not been heeded, Kaser pointed to "a kind of amateurism" among experts from the region who were in charge of the chapter on climate impacts, where the reference appeared. "They might have been good hydrologists or botanists, but they were without any knowledge in glaciology," he said. The Fourth Assessment Report said that the evidence for global warming was now "unequivocal", that the chief source for it was man-made and that there were already signs of climate change, of which glacial melt was one. The massive publication had the effect of a political thunderclap, triggering promises to curb greenhouse gases that had stoked the problem. Kaser said the core evidence of the Fourth Assessment Report remained incontrovertible. Fears of more "IPCC bashing" "I am careful in saying this, because immediately people will again engage in IPCC bashing, which would be wrong," he said. But he acknowledged that the process of peer review, scrutiny and challenge, which underpin the IPCC's reputation had "entirely failed" when it came specifically to the 2035 figure. The 2035 reference appeared in the second volume of the Fourth Assessment Report, a tome published in April 2007 that focussed on the impacts of climate change, especially on human communities. Part of the problem, said Kaser, was "everyone was focussed" on the first volume, published in February 2007, which detailed the physical science for climate change. Work on this volume was "much more attractive to the community" of glaciologists, and they had failed to pick up on the mistake that appeared in the second, he said. The question of glacial melt is a vital one for South Asia, as it touches on flooding or water stress with the potential to affect hundreds of millions of lives. Indian scientists are split on how fast Himalayan glaciers are receding and whether or not climate change is responsible for this. Environment Minister Jairam Ramesh has repeatedly challenged the IPCC's claims, saying there is no "conclusive scientific evidence" linking global warming to the melting of glaciers. From msd001 at gmail.com Tue Jan 19 03:40:14 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 18 Jan 2010 22:40:14 -0500 Subject: [ExI] quantum brains In-Reply-To: <4B54EAA9.1050102@satx.rr.com> References: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost> <4B54EAA9.1050102@satx.rr.com> Message-ID: <62c14241001181940u5c2782e2xf903f2935328605b@mail.gmail.com> On Mon, Jan 18, 2010 at 6:11 PM, Damien Broderick wrote: > You know, you could take this idea a step further. Let?s say there are a set > of increasingly accurate but also increasingly difficult to ?see? physical > theories T_1, T_2, ? where T_1 is classical physics, T_2 is quantum > mechanics, T_3 is quantum gravity, T_4 is some crazy membrane whatnot, etc? > imagine you can get increasing computational capability at every level of > this hierarchy. Since each is increasingly ?difficult to see? for its > precursors, in order to harness the capabilities at level T_j you probably > have to do a lot to make sure those ?hard to see? effects can be used, which > probably isolates you from the precursor levels?. in this picture the > quantum brains would isolate themselves from the classical brains, and the > quantum gravity brains (needing of course to be near black holes) would be > hidden to the two precursor levels, etc. etc. 
etc.>

I have recently encountered "the golden ratio" (phi) being found in something akin to quantum fractals, which set me thinking... This led to string theory with its 10- and 26-dimensional descriptions of fermions and bosons. I have also recently encountered a very simplified explanation of Genetic Algorithms that finally made this concept click for me (a bare-bones sketch appears at the end of this message). So I considered the 10 or 26 dimensions of string theory to be roughly similar to genetic algorithmic coding for genes.

Now typically physicists refer to the higher-than-4 dimensions of spacetime as 'rolled up' because the interval of these other dimensions is less than the Planck interval, so they apparently have no detectable impact on spacetime and the equations collapse neatly into the observable world. However, I wonder if these extra dimensions encode the universe's DNA in a computation space that is implemented as what we call the "real" universe. Perhaps even the fitness of these universal DNAs is measured by their ability to model or evolve intelligence in the remaining 4D spacetime. If you can imagine the number of permutations of 26 dimensions (of which 4 are the observable spacetime universe) then perhaps each is a sibling in the current generation of universes being tested for the ability to support and advance intelligence/self-organization. In a computation-theory sense, these worlds may be "parallel" or cleverly coded to be computed serially, and our perception of each is only a feature of the spacetime itself. (In a similar way, gravity is believed by some to be a result of spacetime curvature, or spacetime's curvature is detected as gravity; this symmetry of causality may be a fundamental property of the manifold representing the universe.)

In response to the quoted paragraph above, the T_4 membrane thinker may currently be driving lower-level realities for the express purpose of an entire existence suited to the observation of the sound of raindrops on the windshield of a parked car or the experience of a dog fetching a ball...
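The "very simplified explanation of Genetic Algorithms" mentioned above might look something like this bare-bones sketch (Python; entirely generic textbook material, every name an assumption of this sketch, with a 26-bit genome only as a playful nod to the 26 dimensions):

    import random

    # Bare-bones genetic algorithm: bit-string genomes, fitness = number
    # of 1-bits, truncation selection, one-point crossover, point mutation.
    def evolve(pop_size=20, genome_len=26, generations=50):
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=sum, reverse=True)            # fitter genomes first
            parents = pop[:pop_size // 2]              # selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)  # crossover point
                child = a[:cut] + b[cut:]
                child[random.randrange(genome_len)] ^= 1  # mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=sum)

    print(evolve())  # tends toward the all-ones genome

From stathisp at gmail.com  Tue Jan 19 09:29:20 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 19 Jan 2010 20:29:20 +1100
Subject: [ExI] quantum brains
In-Reply-To: <486503.84769.qm@web65603.mail.ac4.yahoo.com>
References: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost> <4B54EAA9.1050102@satx.rr.com> <486503.84769.qm@web65603.mail.ac4.yahoo.com>
Message-ID:

2010/1/19 The Avantguardian :
> All in all, Penrose may not be spot on with his mechanism of microtubules and quantum gravity, but the general gist of his argument is compelling. To dismiss the possibility of QM effects in living organisms due to the "warm and wet" mantra is just lazy science.

Penrose's application of Godel's theorem to human thinking as a basis for deducing that the brain is not computable has been dismissed as wrong by just about every critic. That's one of the main problems with the quantum brain idea: there is little reason to think that plain ordinary chemistry is not enough, other than a prejudice that because we feel special, our brains must also be special.

--
Stathis Papaioannou

From gts_2000 at yahoo.com  Tue Jan 19 12:57:33 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 19 Jan 2010 04:57:33 -0800 (PST)
Subject: [ExI] digital simulations, descriptions and copies
In-Reply-To:
Message-ID: <702579.6131.qm@web36501.mail.mud.yahoo.com>

--- On Mon, 1/18/10, Stathis Papaioannou wrote:

> But a digital simulation of a clock will still tell the
> time.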
You have to show that consciousness is more like an > apple than like the telling of time. The computationalist theory of mind, in which the brain is seen as a digital computer running software, does not explain how people can understand their own words. It seems then that those who advance the theory literally don't know what they're talking about. I don't pretend to fully understand the human brain/mind, but for now I have no choice but to accept the default position that it exists as a non-digital object in nature, as just one very smart apple. More on topic: At some level of description almost anything can be seen as digital. The high priests of computationalism noticed this mundane fact and made a religion out of it. They conflate the digital descriptions of things with the non-digital things they describe. -gts From kanzure at gmail.com Tue Jan 19 13:49:02 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 19 Jan 2010 07:49:02 -0600 Subject: [ExI] Fwd: [Cosmic Engineers] Fwd: FW: Immediate opening for part-time person to work with KurzweilAI In-Reply-To: References: <024601ca9862$64dd6b90$2e9842b0$@org> <24f36f411001190052v6ba6a937i252ed163a289e45@mail.gmail.com> Message-ID: <55ad6af71001190549m4b5d5ad4g67d73633a43acf21@mail.gmail.com> ---------- Forwarded message ---------- From: Giulio Prisco Date: Tue, Jan 19, 2010 at 3:15 AM Subject: [Cosmic Engineers] Fwd: FW: Immediate opening for part-time person to work with KurzweilAI To: cosmic-engineers at googlegroups.com ---------- Forwarded message ---------- From: Eli Mohamad Date: Tue, Jan 19, 2010 at 9:52 AM Subject: Fwd: FW: Immediate opening for part-time person to work with KurzweilAI To: euro-transhumanists at googlegroups.com Hi everyone! I'm not sure if you are aware of this opening but maybe someone from your organizations would like to apply. I wish they advertised this when I was at University and had spare time :) All the best and good luck to those who will apply! Eli From: Amara D. Angelica [mailto:amara at kurzweilai.net] Sent: Sunday, January 17, 2010 7:54 AM Subject: Immediate opening for part-time person to work with KurzweilAI Importance: High We have an immediate opening for part-time work with KurzweilAI on a project basis in helping us finalize our new website, working from a home office. There may also be an opportunity for ongoing part-time work. I would highly welcome your recommendations, and you are free to circulate the below announcement to MP members and others you recommend. Thanks very much. - Amara ?Part-time opening, working on a project for KurzweilAI We have an immediate opening for a person to work with KurzweilAI part-time on a project basis in helping us finalize our new website, working from a home office. There may also be an opportunity for future ongoing part-time work. Tasks include: - Read through the current articles, bios, and other documents on our website and make corrections, updates, and additions to information and photos. 
- Data entry and data conversion - Proofreading and copy-editing - Updating website URLs and page content; layout, and formatting of text and images, working in XHTML - Researching content (articles and blog items) and entering them in Wordpress posts - Compiling resources: lists of books, journals, magazines, events, websites, videos, people, books, and other information Requirements: - Excellent command of English; an experienced editor or writer preferred - Experience in doing similar work: developing or administering websites using XHTML or HTML and Wordpress (admin, editorial and data entry; programming not required) - Knowledge of and extensive exposure to futurism, Singularity, and related subjects - Grounding in science and technology and follows current developments - Expertise in copyediting and proofreading and excellent command of English - Perfectionist with obsession for detail and accuracy - Proven experience in meeting tight deadlines - Full access to a reliable high-speed Internet connection and computer on a 24x7 basis - At least 20 hours a week free for an undefined period of time, but probably one month or more - Expertise in computer use, including Word, Excel, PowerPoint, and a website editor, preferably Dreamweaver - A person who subscribes to and participates in a number of lists on KurzweilAI-related subjects is preferred - A KurzweilAI newsletter subscriber who is highly enthusiastic about the content of our website - Experienced user of Skype (text and voice), Twitter, and Facebook, and blog creation Please contact: Amara D. Angelica, Editor, KurzweilAI, http://kurzweilai.net, amara at kurzweilai.net with resume, recommendations, and information on availability -- You received this message because you are subscribed to the Google Groups "Cosmic Engineers" group. To post to this group, send email to cosmic-engineers at googlegroups.com. To unsubscribe from this group, send email to cosmic-engineers+unsubscribe at googlegroups.com. For more options, visit this group at http://groups.google.com/group/cosmic-engineers?hl=en. -- - Bryan http://heybryan.org/ 1 512 203 0507 From natasha at natasha.cc Tue Jan 19 17:03:38 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 19 Jan 2010 12:03:38 -0500 Subject: [ExI] Hi Natasha (was:over posting) Message-ID: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc> On Jan 14, 2010, Natasha Vita-More wrote: John, please be careful about over posting. John, Not everyone gets a speeding ticket at the same moment. Not all mentions of overposting are sent privately to list members/posters. Often a formal message is sent to overposters with a warning. I didn't think you needed a formal warning, considering your being a long-time list member/poster, so I sent a nudge/reminder to the thread. It is as simple as that. I am very sorry that I offended you and I apologize for not sending you a formal warning off list. Natasha Hi Natasha: I'd write to you privately but just to show how dumb I am I could not find your email address so must do so publicly. First of all let me make it very clear that I am not complaining, nobody believes in private property more than me and it's your list not mine so if you want me to post less often then I will do so. I would have put a throttle on my comments a few days ago but I only just now discovered your request as it was sent to the general list and not to me. 
But I am a little curious why you singled me out but not Gordon Swobe, who has recently posted more than me and whose posts have far far far less substance than mine. Ok I admit that last part was subjective and your mileage may vary, but I happen to think subjectivity is the most important thing in the observable universe.

John K Clark

From avantguardian2020 at yahoo.com  Tue Jan 19 18:42:37 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Tue, 19 Jan 2010 10:42:37 -0800 (PST)
Subject: [ExI] quantum brains
Message-ID: <600620.59013.qm@web65616.mail.ac4.yahoo.com>

----- Original Message ----
> From: Stathis Papaioannou
> To: ExI chat list
> Sent: Tue, January 19, 2010 1:29:20 AM
> Subject: Re: [ExI] quantum brains
>
> 2010/1/19 The Avantguardian :
>
> > All in all, Penrose may not be spot on with his mechanism of microtubules and
> quantum gravity, but the general gist of his argument is compelling. To dismiss
> the possibility of QM effects in living organisms due to the "warm and wet"
> mantra is just lazy science.
>
> Penrose's application of Godel's theorem to human thinking as a basis
> for deducing that the brain is not computable has been dismissed as
> wrong by just about every critic. That's one of the main problems with
> the quantum brain idea: there is little reason to think that plain
> ordinary chemistry is not enough, other than a prejudice that because
> we feel special, our brains must also be special.

Whoa. I think you are crossing threads here, Stathis. I am not making any assertions about the computability or incomputability of brain function. I think that is a separate, although potentially related, issue from whether the brain uses quantum mechanics to function. I am merely addressing what I perceive as a classical bias to biology and brain function. For the most part QM *is* computable, otherwise it would be worthless to physics. And while Godel's theorem is a mathematical theorem and not a physical theory, I cannot think of an obvious application of Godel's theorem to physics, and even most mathematicians can have valid careers by pretending it doesn't exist.

With regard to "ordinary chemistry", QM itself is ordinary because it is everywhere. It doesn't just happen when guys in white coats are looking for it. It's just that it blends unnoticed into the intuition-friendly world of classical physics and chemistry most of the time. But consciousness *is* special in that it has yet to be mechanistically described short of the "wire a bunch of neurons together, let them fire off signals at one another, and voila consciousness" explanation. If it was that easy, it would have been replicated already.

That being said, there are a lot of parallels between how people and quantum particles behave. For one thing, they both behave probabilistically. One cannot predict a person's actions in response to a stimulus to the degree that one can predict, say, a falling brick, the oxidation of iron, or other straightforward physical processes. The best one can do is assign probabilities based on the previous history and the statistical analysis of large ensembles of similar people. While economists try to constrain predicted behavior by rationality, people, even rational people, can and do act irrationally under certain conditions.

Stuart LaForge

"Never express yourself more clearly than you think."
- Niels Bohr From scerir at libero.it Tue Jan 19 21:18:00 2010 From: scerir at libero.it (scerir) Date: Tue, 19 Jan 2010 22:18:00 +0100 (CET) Subject: [ExI] quantum brains Message-ID: <29267833.1288581263935880891.JavaMail.defaultUser@defaultHost> Damien [quoting]: > You know, you could take this idea a step further. Let's say there are a set of > increasingly accurate but also increasingly difficult to "see" physical theories T_1, > T_2, ... where T_1 is classical physics, T_2 is quantum mechanics, T_3 is quantum gravity, > T_4 is some crazy membrane whatnot, etc. ... imagine you can get increasing computational > capability at every level of this hierarchy. Since each is increasingly "difficult to > see" for its precursors, in order to harness the capabilities at level T_j you probably > have to do a lot to make sure those "hard to see" effects can be used, which probably > isolates you from the precursor levels... In this picture the quantum brains would > isolate themselves from the classical brains, and the quantum gravity brains (needing of > course to be near black holes) would be hidden to the two precursor levels, etc. etc. > etc. "It would be interesting to provide a toy model of an 'ultimate theory of not everything'. A scenario I can vaguely describe to illustrate the type of mathematics that could serve this purpose is the one of a toy Universe with logical structure resembling the one of Matryoshka nesting dolls, but an infinite series of nesting dolls and in a multitude of dimensions (amount of energy, amount of complexity, amount of classicality of the apparatus...). From 'within' each doll it should only be possible to get information on neighboring dolls, even arriving at the point of fully mastering some of the neighboring dolls, which would then be the starting point for the next step of exploration. I hope a mathematician will soon have (or tell me of the previous existence of) such a toy model for the ultimate theory of not everything, but I have none to offer at present." -Giovanni Amelino-Camelia 'The fairness principle and the ultimate theory of not everything' http://fqxi.org/data/essay-contest-files/AmelinoCamelia_fairnessShVe.pdf From scerir at libero.it Tue Jan 19 22:33:59 2010 From: scerir at libero.it (scerir) Date: Tue, 19 Jan 2010 23:33:59 +0100 (CET) Subject: [ExI] quantum brains Message-ID: <4130180.1302831263940439866.JavaMail.defaultUser@defaultHost> Whoa. I think you are crossing threads here, Stathis. I am not making any assertions about the computability or incomputability of brain function. I think that is a separate, although potentially related, issue from whether the brain uses quantum mechanics to function. I am merely addressing what I perceive as a classical bias to biology and brain function. For the most part QM *is* computable, otherwise it would be worthless to physics. -Stuart LaForge If quantum information is indeed carried by quantum systems, the wavefunction might be the image of this information. While the quantum information cannot be written on paper, the wavefunction (in general) can be. Thus, it is possible to imagine that quantum information does exist, it is not the wavefunction, but it is only represented by it. This distinction seems important, speaking of the largely hypothetical quantum brain. Notice the double dichotomy: brain / mind, quantum wavefunction / quantum information.
From stathisp at gmail.com Wed Jan 20 00:12:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 20 Jan 2010 11:12:08 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <702579.6131.qm@web36501.mail.mud.yahoo.com> References: <702579.6131.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/19 Gordon Swobe : > --- On Mon, 1/18/10, Stathis Papaioannou wrote: > >> But a digital simulation of a clock will still tell the >> time. You have to show that consciousness is more like an >> apple than like the telling of time. > > The computationalist theory of mind, in which the brain is seen as a digital computer running software, does not explain how people can understand their own words. It seems then that those who advance the theory literally don't know what they're talking about. > > I don't pretend to fully understand the human brain/mind, but for now I have no choice but to accept the default position that it exists as a non-digital object in nature, as just one very smart apple. The "matter thinks" theory of mind can't explain how people understand words either. My chair doesn't think, neither does a glass of water, even if you put salt and amino acids and whatever else you fancy in it. How could it? It's impossible! But people do think, so the mind must come from something other than matter. It can't come from matter processing information either. So it must be magic! > More on topic: At some level of description almost anything can be seen as digital. The high priests of computationalism noticed this mundane fact and made a religion out of it. They conflate the digital descriptions of things with the non-digital things they describe. The theory is that it is matter acting in a particular way that produces intelligence and that consciousness is a necessary accompaniment of intelligence. Your theory is that matter acting in a particular way produces intelligence and, independently of this, it produces consciousness. It is the possibility that intelligence and consciousness can be separated that is the primary problem in all this discussion. If they can be separated this leads to an absurdity: that you could be a zombie right now and not know it. This is absurd because although zombies don't know they are zombies, conscious people know they are *not* zombies, otherwise the distinction between zombies and conscious people becomes meaningless. You've responded to the thought experiment that leads to this absurdity by saying that it's crazy, and I take that as agreement. If consciousness and intelligence can't be separated and you still feel strongly about computers lacking consciousness, then you can consistently claim that computers can't have the sort of intelligence associated with consciousness. There is nothing in the laws of physics which says this isn't so, although also nothing in what we know to suggest it is so. -- Stathis Papaioannou From spike66 at att.net Wed Jan 20 02:24:11 2010 From: spike66 at att.net (spike) Date: Tue, 19 Jan 2010 18:24:11 -0800 Subject: [ExI] massachusetts special senate election In-Reply-To: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc> References: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc> Message-ID: <198B1BF0596C40C49F10DF324B56AB25@spike> Well, it was an admirable attempt, but we were crushed in the Massachusetts election, spanked like an ugly stepchild. Barely managed to get 1 percent of the vote. {8-[ Better luck next time!
{8-] spike From stathisp at gmail.com Wed Jan 20 02:57:45 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 20 Jan 2010 13:57:45 +1100 Subject: [ExI] massachusetts special senate election In-Reply-To: <198B1BF0596C40C49F10DF324B56AB25@spike> References: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc> <198B1BF0596C40C49F10DF324B56AB25@spike> Message-ID: 2010/1/20 spike : > > Well, it was an admirable attempt, but we were crushed in the Massachusetts > election, spanked like an ugly stepchild. Barely managed to get 1 percent > of the vote. {8-[ "We" being? -- Stathis Papaioannou From spike66 at att.net Wed Jan 20 03:26:42 2010 From: spike66 at att.net (spike) Date: Tue, 19 Jan 2010 19:26:42 -0800 Subject: [ExI] massachusetts special senate election In-Reply-To: References: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc> <198B1BF0596C40C49F10DF324B56AB25@spike> Message-ID: <98AC07BA70094E42B16C09FEFFCBB81C@spike> > > 2010/1/20 spike : > > > > Well, it was an admirable attempt, but we were crushed in the > > Massachusetts election, spanked like an ugly stepchild. Barely > > managed to get 1 percent of the vote. {8-[ > > "We" being? Stathis Papaioannou We, the libertarians. Ugly result. Almost all the vote went to the two statist candidates. spike From nanite1018 at gmail.com Wed Jan 20 03:56:52 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Tue, 19 Jan 2010 22:56:52 -0500 Subject: [ExI] massachusetts special senate election In-Reply-To: <98AC07BA70094E42B16C09FEFFCBB81C@spike> References: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc> <198B1BF0596C40C49F10DF324B56AB25@spike> <98AC07BA70094E42B16C09FEFFCBB81C@spike> Message-ID: <484C1B95-CD42-4BF7-93EF-F510F405F57F@GMAIL.COM> >> 2010/1/20 spike : >>> >>> Well, it was an admirable attempt, but we were crushed in the >>> Massachusetts election, spanked like an ugly stepchild. Barely >>> managed to get 1 percent of the vote. {8-[ >> >> "We" being? Stathis Papaioannou > > We, the libertarians. Ugly result. Almost all the vote went to the > two > statist candidates. spike You know, I would rather have him in office with the possibility of slowing down the growth in government than help to grant the Dems a blank check. Would a libertarian be better? Eh, I suppose, though probably not when there's only one of them. Joshua Job nanite1018 at gmail.com From lcorbin at rawbw.com Wed Jan 20 04:50:34 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 19 Jan 2010 20:50:34 -0800 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> Message-ID: <4B568B9A.7010400@rawbw.com> Stathis Papaioannou wrote: > Also, a true philosophical zombie has no qualia at all, > and no understanding that it has no qualia > (because it has no understanding of anything). Yes. I believe that this is how those who believe in qualia use the terms. But to my surprise, you seem to believe that it makes sense to talk of qualia. You do? >> We come right back to the fundamental question: does >> the functional equivalent supply the subjectivity, >> i.e., supply the "qualia" of existence? When I use the term, I am forgiven---because I only do so to communicate with the heathen who don't understand. While I find it *conceivable* that there could be zombies, I consider it ridiculous, for many reasons, chief among them is that nature presumably could have economized by turning out zombies instead of us. Er, I mean instead of me.
In fact, I strongly suspect that people who live in or are from the antipodes (including Australians) have no subjectivity; something about the reversed Coriolis effect causes the quantum entanglement to fail to manifest true consciousness. In other words, you other hemisphere types on this list make a lot of sound and noise, but you don't truly mean it because you are not. Lee P.S. Even when you make sense: >> To me it seems completely bizarre and extremely >> unlikely that somehow nature would have chosen to >> bestow a "beingness" or consciousness on one >> peculiar way for mammals to be successful: >> our way, with gooey neurons and neurotransmitters. >> And that had electronic or other means of >> accomplishing the same ends for exactly the >> same kinds of creature with high fitness been >> supplied by nature instead, then magically no >> consciousness, no qualia, and no subject. >> >> It sounds absurd even to write out such a claim. > > It's not absurd, just very unlikely to be the case. But zombie > consciousness is absurd. From lcorbin at rawbw.com Wed Jan 20 05:26:55 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 19 Jan 2010 21:26:55 -0800 Subject: [ExI] "If I can't be me, then I might as well kill myself" Message-ID: <4B56941F.3000606@rawbw.com> Fuld Auditorium, January 9, 2024. Professor Harvey Flockenbird of Princeton gives a press conference in the wake of his startling paper---universally heralded by physicists---presenting the first full and complete Grand Unified Theory of physics. "Professor Flockenbird! Professor Flockenbird!", cry the reporters, "just one question!" But the questions go on and on until finally a young reporterette asks, "Dr. Flockenbird, in layman's terms, am I given to understand that in your theory every femtosecond each particle of matter is completely disintegrated? How then is it possible for us to go on thinking of matter as having duration and being substantial?" "Well," says the professor, "the utter destruction of matter takes place on the order of zepto-seconds, a small fraction of a femtosecond. Before a whole femtosecond is up, however, the *information* that constituted the matter---having been bounced around and re-digitized and analoged trillions of times--- causes particle quadrupoles to organize and recreate the matter, whether it be quark, gluon, or electron." "But then," she follows up, "doesn't that mean that each one of us, each human being, is instantaneously destroyed and re-created billions of times each second?" "Oh yes," he replies, "after all, our bodies are made of ordinary matter!" It was this statement, made on January 9 in Fuld Auditorium, that precipitated the wave of suicides across Europe and North America. "My life has been nothing but a tissue of lies," exclaimed Matthew Soleil, best-selling professor of philosophy, just before he took his own life. "I'm not the person that I thought I was, and, worse than that, it's clear that I never was! Each second I become someone else, a complete stranger. Obviously, there is no point in going on. Because there is nobody "here" to go on for." A counterclaim was made by Christopher Reich of the University of Glasgow. "What we have here is, sadly, just evolution in action, although this is the first time that fitness has descended from philosophy.
All of these poor unfortunates simply fail to understand that the continuity of their lives is fulfilled entirely by the continuity of the *information*, and that the actual particular sub-atomic particles, which do flash out of existence all the time, being replaced much later, are only temporary and unimportant carriers of the actual content." Alas, people were to be completely stubborn on this point, as they always had been about teleportation scenarios, and the last un-uploaded people all killed themselves not many years later as they grew to an age where they could finally understand the TOE of the early 21st century. Lee From stathisp at gmail.com Wed Jan 20 07:51:29 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 20 Jan 2010 18:51:29 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <4B568B9A.7010400@rawbw.com> References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> Message-ID: 2010/1/20 Lee Corbin : > Stathis Papaioannou wrote: > >> Also, a true philosophical zombie has no qualia at all, > >> and no understanding that it has no qualia >> >> (because it has no understanding of anything). > > Yes. I believe that this is how those who believe in qualia > use the terms. But to my surprise, you seem to believe > that it makes sense to talk of qualia. You do? Qualia, consciousness, subjectivity, experience, intentionality, understanding: What a ridiculous question! Of course I believe in these things! So do you! So does everyone who is able to believe anything! But I also believe that they are related to the information processing that goes on in my brain in the same way that raising my arm is related to contraction of my deltoid muscle causing abduction of my humerus. >>> We come right back to the fundamental question: does >>> the functional equivalent supply the subjectivity, >>> i.e., supply the "qualia" of existence? > > When I use the term, I am forgiven---because I only do so > to communicate with the heathen who don't understand. > > While I find it *conceivable* that there could be > zombies, I consider it ridiculous, for many reasons, > chief among them is that nature presumably could > have economized by turning out zombies instead of > us. The philosophical argument turns on the meaning of the term "conceivable". Chalmers says that zombies are conceivable, but probably physically impossible. Searle says they are both conceivable and physically possible. Dennett says they are not even conceivable, that is, the idea leads to a logical contradiction. I tend to agree with Dennett. -- Stathis Papaioannou From stathisp at gmail.com Wed Jan 20 08:35:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 20 Jan 2010 19:35:50 +1100 Subject: [ExI] quantum brains In-Reply-To: <600620.59013.qm@web65616.mail.ac4.yahoo.com> References: <600620.59013.qm@web65616.mail.ac4.yahoo.com> Message-ID: 2010/1/20 The Avantguardian : >> > All in all, Penrose may not be spot on with his mechanism of microtubules and >> quantum gravity, but the general gist of his argument is compelling. To dismiss >> the possibility of QM effects in living organisms due to the "warm and wet" >> mantra is just lazy science. >> >> Penrose's application of Godel's theorem to human thinking as a basis >> for deducing that the brain is not computable has been dismissed as >> wrong by just about every critic.
That's one of the main problems with >> the quantum brain idea: there is little reason to think that plain >> ordinary chemistry is not enough, other than a prejudice that because >> we feel special, our brains must also be special. > > Whoa. I think you are crossing threads here, Stathis. I am not making any assertions about the computability or incomputability of brain function. I think that is a separate, although potentially related, issue from whether the brain uses quantum mechanics to function. I am merely addressing what I perceive as a classical bias to biology and brain function. For the most part QM *is* computable, otherwise it would be worthless to physics. While Godel's theorem is a mathematical theorem and not a physical theory, I cannot think of an obvious application of Godel's theorem to physics, and even most mathematicians can have valid careers by pretending it doesn't exist. Penrose thinks there is an as yet undiscovered theory of quantum gravity which is uncomputable and which is an essential part of brain function. > With regard to "ordinary chemistry", QM itself is ordinary because it is everywhere. It doesn't just happen when guys in white coats are looking for it. It's just that it blends unnoticed into the intuition-friendly world of classical physics and chemistry most of the time. But consciousness *is* special in that it has yet to be mechanistically described short of the "wire a bunch of neurons together, let them fire off signals at one another, and voila consciousness" explanation. If it was that easy, it would have been replicated already. We haven't been able to make self-repairing, self-replicating machines, and nature has been doing it for billions of years. It's possible that we will be able to upload minds before we can make artificial organisms. But that doesn't mean that vitalism is correct. > That being said, there are a lot of parallels between how people and quantum particles behave. For one thing, they both behave probabilistically. One cannot predict a person's actions in response to a stimulus to the degree that one can predict, say, a falling brick, the oxidation of iron, or some other straightforward physical process. The best one can do is assign probabilities based on the previous history and the statistical analysis of large ensembles of similar people. While economists try to constrain predicted behavior by rationality, people, even rational people, can and do act irrationally under certain conditions. You could make the same analogy between quantum particles and any classical chaotic or truly random system. -- Stathis Papaioannou From emlynoregan at gmail.com Wed Jan 20 12:53:42 2010 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 20 Jan 2010 23:23:42 +1030 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <580930c21001180256y14fb688fn20b82111216b697d@mail.gmail.com> References: <4B53990B.8010202@rawbw.com> <580930c21001180256y14fb688fn20b82111216b697d@mail.gmail.com> Message-ID: <710b78fc1001200453h4855d3c3p473a05916d46b94@mail.gmail.com> 2010/1/18 Stefano Vaj : > 2010/1/18 Lee Corbin : >> The central question, of course, is whether one would >> *survive* the uploading process. > > I am under the impression that this betrays a more general fear which > probably originates from the inapplicability of evolution-encoded > reactions to novel scenarios. > > For instance, does one survive entire material destruction?
The > evolution-encoded answer to that, by extension from massive bodily > harm leading to death, is "obviously not", irrespective of various > consolatory religious theories to the contrary. > > But what about teleport? The truth is that the death of the original > individual and the birth of a copy, or the continued existence of the > former, are both plausible ways of describing a hypothetical event > the nature of which does not change in the least depending on our view > thereof. > > Another classical Gedankenexperiment: what about an operation where my > neurons are replaced one by one by other, functionally equivalent,... > carbon-based neurons, until none remains? Do I die? And when? > > This is why I think that the curious idea that the interesting thing > in organic brains would not be the kind of information processing they > perform and their performance in such tasks, but some other, undefined > and elusive, quality, is a matter of fear which cannot be overcome > with rational argument. > > -- > Stefano Vaj Right on Stefano, spot on. -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From gts_2000 at yahoo.com Wed Jan 20 12:59:28 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 20 Jan 2010 04:59:28 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <303207.95745.qm@web36508.mail.mud.yahoo.com> --- On Tue, 1/19/10, Stathis Papaioannou wrote: > The "matter thinks" theory of mind can't explain how > people understand words either. I don't consider myself in possession of any "theory". I just observe that the brain as it exists in nature does not seem to work like a digital computer. We notice a phenomenon and we try to put together hypotheses to explain it. If a given hypothesis fails, we toss it out and keep trying. In this case we notice the phenomenon of consciousness/semantics -- the conscious understanding of words. Hoping to explain the conscious understanding of words and other puzzles, some clever techno-geeks with just enough knowledge of philosophy to be dangerous put together the so-called "computationalist theory of mind". It seemed like a great idea at the time. The running of mental programs might explain the method by which we can think but it fails to explain how we *know* about our own thoughts. The brain must then do something else besides run programs. So I toss that hypothesis out as incomplete or wrong and keep trying. Philosophically, to resolve the consciousness/semantics problem the computationalist theory must commit the homunculus fallacy. If the brain equals a digital computer and if the mind equals a program running on that computer then there must exist a homunculus to operate and observe that computer. So the theory fails. -gts From stathisp at gmail.com Wed Jan 20 13:45:07 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jan 2010 00:45:07 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <303207.95745.qm@web36508.mail.mud.yahoo.com> References: <303207.95745.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/20 Gordon Swobe : > --- On Tue, 1/19/10, Stathis Papaioannou wrote: >> The "matter thinks" theory of mind can't explain how >> people understand words either. > > I don't consider myself in possession of any "theory". I just observe that the brain as it exists in nature does not seem to work like a digital computer.
> > We notice a phenomenon and we try to put together hypotheses to explain it. If a given hypothesis fails, we toss it out and keep trying. > > In this case we notice the phenomenon of consciousness/semantics -- the conscious understanding of words. > > Hoping to explain the conscious understanding of words and other puzzles, some clever techno-geeks with just enough knowledge of philosophy to be dangerous put together the so-called "computationalist theory of mind". It seemed like a great idea at the time. > > The running of mental programs might explain the method by which we can think but it fails to explain how we *know* about our own thoughts. The brain must then do something else besides run programs. So I toss that hypothesis out as incomplete or wrong and keep trying. > > Philosophically, to resolve the consciousness/semantics problem the computationalist theory must commit the homunculus fallacy. If the brain equals a digital computer and if the mind equals a program running on that computer then there must exist a homunculus to operate and observe that computer. So the theory fails. The brain is not organised along the lines of a digital computer, but it is organised along the lines of an information processing system that might evolve naturally, which is copied in artificial neural networks. That's the important part of it, as far as evolution is concerned; consciousness, such as it is, is a side-effect. In any case, you have not presented *any* theory as to what consciousness may be due to if not as a side-effect of information processing. If the NCC (neural correlate of consciousness) is a particular sequence of chemical reactions, why should that be an "explanation"? You could argue that there is no understanding in chemical reactions, it's totally ridiculous, and therefore there must be an immaterial soul of which the chemical reaction is just a physical manifestation. And you could make this argument for *any* physicalist theory, just as you make it for computation. These arguments are mere speculation with no logical or empirical force; it's just an opinion that matter, or physical activity, or matter specifically engaged in computation, cannot give rise to consciousness. But the proof still stands that if consciousness is separable from intelligence then you are forced to an absurd position on what consciousness is, regardless of what theory eventually turns out to be right. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Jan 20 13:52:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 20 Jan 2010 05:52:44 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <835226.32925.qm@web36506.mail.mud.yahoo.com> --- On Tue, 1/19/10, Stathis Papaioannou wrote: >> More on topic: At some level of description almost >> anything can be seen as digital. The high priests of >> computationalism noticed this mundane fact and made a >> religion out of it. They conflate the digital descriptions >> of things with the non-digital things they describe. > > The theory is that it is matter acting in a particular way > that produces intelligence and that consciousness is a > necessary accompaniment of intelligence. Your theory is that matter > acting in a particular way produces intelligence and, independently of > this, it produces consciousness... Your words here don't seem to address the more general comment of mine that you quoted. Consider the ordinary apple that I introduced a few messages ago.
I consider natural apples non-digital objects even if we can simulate them on digital computers. I contend that digital simulations of non-digital objects equal nothing more than *descriptions* of things and that we commit an egregious philosophical blunder when we conflate the digital descriptions of non-digital objects with the real objects they describe. Some people seem to think that a digital simulation of an apple somehow equals a real apple; that supposing we find ways to create digital simulations of ourselves along with digital simulations of apples then those simulations of ourselves will actually eat and enjoy the taste of those scrumptious digitally-simulated delicious red apples. -gts From pharos at gmail.com Wed Jan 20 14:43:16 2010 From: pharos at gmail.com (BillK) Date: Wed, 20 Jan 2010 14:43:16 +0000 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <835226.32925.qm@web36506.mail.mud.yahoo.com> References: <835226.32925.qm@web36506.mail.mud.yahoo.com> Message-ID: On 1/20/10, Gordon Swobe wrote: > I contend that digital simulations of non-digital objects equal nothing > more than *descriptions* of things and that we commit an egregious > philosophical blunder when we conflate the digital descriptions of > non-digital objects with the real objects they describe. > > Some people seem to think that a digital simulation of an apple > somehow equals a real apple; that supposing we find ways to create > digital simulations of ourselves along with digital simulations of apples > then those simulations of ourselves will actually eat and enjoy the taste > of those scrumptious digitally-simulated delicious red apples. > Show me a 'real' consciousness! I think it's just a simulation created by our brains, unless you can give me one in a box with ribbon round it. BillK From pharos at gmail.com Wed Jan 20 14:54:47 2010 From: pharos at gmail.com (BillK) Date: Wed, 20 Jan 2010 14:54:47 +0000 Subject: [ExI] massachusetts special senate election In-Reply-To: <98AC07BA70094E42B16C09FEFFCBB81C@spike> References: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc> <198B1BF0596C40C49F10DF324B56AB25@spike> <98AC07BA70094E42B16C09FEFFCBB81C@spike> Message-ID: On 1/20/10, spike wrote: > We, the libertarians. Ugly result. Almost all the vote went to the two > statist candidates. > > This wasn't a vote about policies. This turnaround in a Democrat state was voter fury and totally illogical. Voting a Republican in won't help. It is just one in the eye for Obama. Quote: A top pollster to Democratic Senate candidate Martha Coakley told HuffPost on Tuesday that the White House, in attempting to blame the Coakley campaign for a potential defeat today in Massachusetts, underestimates the wave of populist fury among Massachusetts voters. Pollster Celinda Lake said Coakley was hampered by the failure of the White House and Congress to confront Wall Street. Lake pointed to polling released by the Economic Policy Institute showing that 65 percent of Americans thought the stimulus served banks' interests, 56 percent thought it served corporations and only ten percent that it benefited them. "That is a formula for failure for the Democrats. We have to deliver on economic policies that take on Wall Street and we have to do it for five months, not just five days. We really have to deliver on the policies," she said.
--------------- BillK From stefano.vaj at gmail.com Wed Jan 20 16:09:26 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 20 Jan 2010 17:09:26 +0100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> <580930c21001180358r21050d36n57fdbd335267e367@mail.gmail.com> <580930c21001181340l5571a39em2ed9a353a266ed06@mail.gmail.com> Message-ID: <580930c21001200809x64d3af5led6ebfac2b2703fa@mail.gmail.com> 2010/1/18 Stathis Papaioannou : > The standard definition of a philosophical zombie is that it does not > have any consciousness, only intelligent behaviour. Yes, I know, but I would follow Dennett in including in "behaviour", intelligent or otherwise, the kind of physical states, expressions, (self-)declarations, etc. which exhaust the concept of "believing to be conscious", or for that matter "believing X". But I realise that this would not satisfy dualists, who would be ready to admit, for instance, that somebody may be wrong not just in, but also on, his or her actual beliefs, as in "I believe that I believe X, but I am wrong, because in fact I do not believe it". The second "believing" being a behaviour, the first being something ineffably "else". -- Stefano Vaj From stefano.vaj at gmail.com Wed Jan 20 16:16:02 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 20 Jan 2010 17:16:02 +0100 Subject: [ExI] "If I can't be me, then I might as well kill myself" In-Reply-To: <4B56941F.3000606@rawbw.com> References: <4B56941F.3000606@rawbw.com> Message-ID: <580930c21001200816l61a36ed3rfce9821c45c4991d@mail.gmail.com> 2010/1/20 Lee Corbin : > Fuld Auditorium, January 9, 2024. Professor Harvey Flockenbird of > Princeton gives a press conference in the wake of his startling > paper---universally heralded by physicists---presenting the first > full and complete Grand Unified Theory of physics. Wonderful!! :-DDD -- Stefano Vaj From jonkc at bellsouth.net Wed Jan 20 16:29:20 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 20 Jan 2010 11:29:20 -0500 Subject: [ExI] "If I can't be me, then I might as well kill myself" In-Reply-To: <4B56941F.3000606@rawbw.com> References: <4B56941F.3000606@rawbw.com> Message-ID: <758B42D2-FB71-4031-9480-348554AB70DE@bellsouth.net> On Jan 20, 2010, at 12:26 AM, Lee Corbin wrote: > Fuld Auditorium, January 9, 2024. Professor Harvey Flockenbird of > Princeton gives a press conference [...] EXCELLENT! Lee, you are at the top of your game. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Jan 20 17:49:05 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 20 Jan 2010 18:49:05 +0100 Subject: [ExI] NKS Message-ID: <580930c21001200949q5ccdde08q188616353d481842@mail.gmail.com> For those who are interested in NKS-related events... <> -- Stefano Vaj From stefano.vaj at gmail.com Wed Jan 20 18:00:54 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 20 Jan 2010 19:00:54 +0100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <702579.6131.qm@web36501.mail.mud.yahoo.com> References: <702579.6131.qm@web36501.mail.mud.yahoo.com> Message-ID: <580930c21001201000j211c61der8963a2e063d4b6b6@mail.gmail.com> 2010/1/19 Gordon Swobe : > The computationalist theory of mind, in which the brain is seen as a digital computer running software, does not explain how people can understand their own words.
The computationalist theory of PCs, in which digital computers are seen as, well, digital computers, does not explain how PCs can "understand" their own instructions either. The X-theory of organic brains, in which the brain would be processing information in some radically different way from any other "universal computing-level system", does not even show that some (which?) organic brains would do anything special in this respect. Because in fact if "understanding" is taken to mean something so radically ineffable, it is very difficult for any theory to make any sensible use of the concept. -- Stefano Vaj From spike66 at att.net Wed Jan 20 21:26:17 2010 From: spike66 at att.net (spike) Date: Wed, 20 Jan 2010 13:26:17 -0800 Subject: [ExI] massachusetts special senate election In-Reply-To: References: <20100119120338.miocu8wk3oooskgw@webmail.natasha.cc><198B1BF0596C40C49F10DF324B56AB25@spike><98AC07BA70094E42B16C09FEFFCBB81C@spike> Message-ID: <93F708602D8749739366EB4552BB1CC2@spike> > ...On Behalf Of BillK > ... > Voting a Republican in won't help... Ja. In the US we handed a huge amount of political power from one statist party to the other. Nothing changed, or if so it got worse. Now we are starting the process of handing power back to the first statist party, again hoping things will change. I fear they might. > "That is a formula for failure for the Democrats... Democrats, isn't that the outfit that we were told would rule for 40 years? Perhaps it was a typo, he meant 40 weeks: http://www.washingtonexaminer.com/opinion/blogs/beltway-confidential/Good-news-for-GOP-Carville-predicts-Dems-will-rule-for-40-years-44455297.html >...polling ... showing that 65 percent of Americans thought the stimulus served banks' interests, 56 percent thought it served corporations and only ten percent that it benefited them... They didn't ask me. I would have said the stimulus actively harmed me, for I was one who had to pay for it. I would further say that scheme didn't benefit banks or corporations in the long run either, but harmed them too. Then I remind the pollster that corporations are made up of people. There are no horses working there, no dogs, no hamsters. And then remind them that the banks are putting to work the money of people; no horses etc deposit money there nor borrow there. Taxing banks equals taxing people who bank there. > ...We really have to deliver on the policies," she said... BillK Ja. I would like to call her and ask her to deliver her policies elsewhere and work towards reducing the size and scope of Government Inc. spike From stathisp at gmail.com Thu Jan 21 00:45:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jan 2010 11:45:42 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <835226.32925.qm@web36506.mail.mud.yahoo.com> References: <835226.32925.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/21 Gordon Swobe : > --- On Tue, 1/19/10, Stathis Papaioannou wrote: > >>> More on topic: At some level of description almost >>> anything can be seen as digital. The high priests of >>> computationalism noticed this mundane fact and made a >>> religion out of it. They conflate the digital descriptions >>> of things with the non-digital things they describe. >> >> The theory is that it is matter acting in a particular way >> that produces intelligence and that consciousness is a >> necessary accompaniment of intelligence.
Your theory is that matter >> acting in a particular way produces intelligence and, independently of >> this, it produces consciousness... > > Your words here don't seem to address the more general comment of mine that you quoted. > > Consider the ordinary apple that I introduced a few messages ago. I consider natural apples non-digital objects even if we can simulate them on digital computers. > > I contend that digital simulations of non-digital objects equal nothing more than *descriptions* of things and that we commit an egregious philosophical blunder when we conflate the digital descriptions of non-digital objects with the real objects they describe. > > Some people seem to think that a digital simulation of an apple somehow equals a real apple; that supposing we find ways to create digital simulations of ourselves along with digital simulations of apples then those simulations of ourselves will actually eat and enjoy the taste of those scrumptious digitally-simulated delicious red apples. The problem is not that you make these assertions but that you make them with a show of such overwhelming confidence, dismissing any counterarguments without rebutting them. What do you say to someone who confidently asserts that matter *cannot* give rise to thought, so the mind must be due to a magical immaterial substance? Whatever you say, they will just keep repeating that matter *cannot* give rise to thought. It's obvious. A chair can't think; a glass of water with chemicals in it can't think; and if you believe a bunch of chemicals in your head can think, you're just deluded. That's what you keep doing with your argument. For example, I have *assumed* that you are right and shown that it leads to the possibility of conscious zombies, which you agree are absurd. But you don't adjust your position, or attempt to show that it does not in fact lead to this absurdity, or even that it is not an absurdity (as I think Lee Corbin was saying). You just keep repeating that computers can't think, as if any criticisms are a priori a waste of time. And you accuse others of holding religious beliefs! -- Stathis Papaioannou From lcorbin at rawbw.com Thu Jan 21 01:45:16 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Wed, 20 Jan 2010 17:45:16 -0800 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> Message-ID: <4B57B1AC.2060906@rawbw.com> Stathis Papaioannou wrote: >> I believe that this is how those who believe in qualia >> use the terms. But to my surprise, you seem to believe >> that it makes sense to talk of qualia. You do? > > Qualia, consciousness, subjectivity, experience, intentionality, > understanding: What a ridiculous question! Of course I believe in > these things! So do you! So does everyone who is able to believe > anything! The "Q" word is especially dangerous. It's a reification into a noun that causes many people to actually *look* for a physiological manifestation. We're a lot better off without it. Unlike those other, relatively unobjectionable terms, this one arose strictly in the course of armchair philosophy dabbling. > But I also believe that they are related to the information > processing that goes on in my brain in the same way that raising my > arm is related to contraction of my deltoid muscle causing abduction > of my humerus. To be sure. 
>> While I find it *conceivable* that there could be >> zombies, I consider it ridiculous, for many reasons, >> chief among them is that nature presumably could >> have economized by turning out zombies instead of >> us. > > The philosophical argument turns on the meaning of the term > "conceivable". Chalmers says that zombies are conceivable, but > probably physically impossible. Well, then, thanks for the warning about another word I should avoid. And I can see from the subject line of this thread that I too was picking nits. Lee > Searle says they are both conceivable > and physically possible. Dennett says they are not even conceivable, > that is, the idea leads to a logical contradiction. I tend to agree > with Dennett. > > From stathisp at gmail.com Thu Jan 21 02:00:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jan 2010 13:00:27 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <4B57B1AC.2060906@rawbw.com> References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> Message-ID: 2010/1/21 Lee Corbin : >> Qualia, consciousness, subjectivity, experience, intentionality, >> understanding: What a ridiculous question! Of course I believe in >> these things! So do you! So does everyone who is able to believe >> anything! > > The "Q" word is especially dangerous. It's a reification > into a noun that causes many people to actually *look* > for a physiological manifestation. We're a lot better off > without it. Unlike those other, relatively unobjectionable > terms, this one arose strictly in the course of armchair > philosophy dabbling. I don't see why "qualia" is used at all, but I take it as synonymous with "experiences". Experiences are thought to be "real", and to have a physiological basis. A zombie is supposed to have the behaviour but not the experiences. -- Stathis Papaioannou From emlynoregan at gmail.com Thu Jan 21 03:28:54 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 21 Jan 2010 13:58:54 +1030 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> Message-ID: <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> 2010/1/21 Stathis Papaioannou : > 2010/1/21 Lee Corbin : > >>> Qualia, consciousness, subjectivity, experience, intentionality, >>> understanding: What a ridiculous question! Of course I believe in >>> these things! So do you! So does everyone who is able to believe >>> anything! >> >> The "Q" word is especially dangerous. It's a reification >> into a noun that causes many people to actually *look* >> for a physiological manifestation. We're a lot better off >> without it. Unlike those other, relatively unobjectionable >> terms, this one arose strictly in the course of armchair >> philosophy dabbling. > > I don't see why "qualia" is used at all, but I take it as synonymous > with "experiences". Experiences are thought to be "real", and to have > a physiological basis. A zombie is supposed to have the behaviour but > not the experiences. I think Qualia really means emotions, or feelings. When people talk about the "redness of red", they mean the feeling of it. And really, that you feel anger, or happiness, in a subjective, experiencing kind of way, is exactly as mysterious as the feeling of red. Someone earlier was talking about why we might have these subjective, experienced feelings at all, why would evolution use them? 
I think that, whatever they are, they are part of a system developed back when the brain was much poorer at information processing, in much simpler organisms, and that system is still there, even though if you were to start from scratch without it, you could probably make a fitter organism. We're made up of all kinds of things like that. -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From spike66 at att.net Thu Jan 21 03:20:26 2010 From: spike66 at att.net (spike) Date: Wed, 20 Jan 2010 19:20:26 -0800 Subject: [ExI] separated at birth? (sorry couldn't resist) Message-ID: <4650F17F2E264B828808D74CB62D3AEB@spike> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: algore.jpg Type: image/jpeg Size: 5500 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: jabbarotj.jpg Type: image/jpeg Size: 44141 bytes Desc: not available URL: From stathisp at gmail.com Thu Jan 21 04:08:13 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jan 2010 15:08:13 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> Message-ID: 2010/1/21 Emlyn : > Someone earlier was talking about why we might have these subjective, > experienced feelings at all, why would evolution use them? I think > that, whatever they are, they are part of a system developed back when > the brain was much poorer at information processing, in much simpler > organisms, and that system is still there, even though if you were to > start from scratch without it, you could probably make a fitter > organism. We're made up of all kinds of things like that. Do you really think it would be easier to make a device with subjectivity than without? I think it is more likely that the subjectivity is a necessary side-effect of the information processing underpinning it. -- Stathis Papaioannou From olga.bourlin at gmail.com Thu Jan 21 04:23:47 2010 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Wed, 20 Jan 2010 20:23:47 -0800 Subject: [ExI] separated at birth? (sorry couldn't resist) In-Reply-To: <4650F17F2E264B828808D74CB62D3AEB@spike> References: <4650F17F2E264B828808D74CB62D3AEB@spike> Message-ID: Nah ... Jabba reminds me of Rush Limbaugh! Olga 2010/1/20 spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: jabbarotj.jpg Type: image/jpeg Size: 44141 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: algore.jpg Type: image/jpeg Size: 5500 bytes Desc: not available URL: From emlynoregan at gmail.com Thu Jan 21 04:46:26 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 21 Jan 2010 15:16:26 +1030 Subject: [ExI] Coherent vs. 
Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> Message-ID: <710b78fc1001202046n9d17f8ci8ad5c3a7a377b078@mail.gmail.com> 2010/1/21 Stathis Papaioannou : > 2010/1/21 Emlyn : > >> Someone earlier was talking about why we might have these subjective, >> experienced feelings at all, why would evolution use them? I think >> that, whatever they are, they are part of a system developed back when >> the brain was much poorer at information processing, in much simpler >> organisms, and that system is still there, even though if you were to >> start from scratch without it, you could probably make a fitter >> organism. We're made up of all kinds of things like that. > > Do you really think it would be easier to make a device with > subjectivity than without? I think it is more likely that the > subjectivity is a necessary side-effect of the information processing > underpinning it. I don't know how subjective experience works at all. But notice that we don't use feelings about things for any higher order cognitive tasks; they're always used in "gut" (ie: instinctive) reactions/decisions, broad sweeping heuristic changes to other parts of the brain (eg: Angry! Give more weight to aggressive/punitive measures!), simple fast decision making (hurts! pull away!). When I think closely about what my subjective experience is, sans all the information processing, I find there's just not much left for it to do, except that it feels things (including "qualia"), and communicates the results of that to the rest of the brain. How does that happen? Buggered if I know. Why not just "simulate" feeling, in an information processing kind of way? You've got me there, that's exactly what I would do if I had to replicate it. If you severely degraded our ability to think abstractly, our sophisticated memory, our language abilities, other higher order cognitive processing (?), what would the resulting brain function like? We'd still have emotions, "qualia", pain/pleasure. We'd behave just like less sophisticated animals do. And animals do behave as though they feel things. I am convinced that they have first person subjective experience of the world, like we do (but without the higher order stuff that we have for thinking about it). I guess what I'm getting at is, I don't see any reason that higher abstract reasoning and subjective first person experience are related. However that experience works, I don't think it's new to the most intelligent animals; I think it's old, and its purpose is to help drive an unsophisticated organism through the world. This begs the question, why have subjective experience, when this should be simple wiring stuff? Well, I don't know. Why make a bird flap its wings to fly, that's *hard*. But it was the way natural selection found to do it. Just like natural selection didn't manage to invent spinning motors or jet engines, it also didn't invent the digital computer. Instead, it invented something we have a hard time even describing, the first person subjective experience "module", and tacked that to other clever hardware to make early brains. However it works, it's entirely functional, it does things, so it's part of the natural world of course. Fascinating! -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog
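One concrete way to read Emlyn's "broad sweeping heuristic changes" is as a global reweighting applied to whatever action scores the rest of the system has already computed, rather than as a separate decision-maker. The short Python sketch below is only an illustration of that reading; the actions, moods, weights and numbers are all invented for the purpose.

    # Toy model: an "emotion" is a multiplicative bias over baseline action
    # scores, not an extra module that picks actions by itself.
    # All names and values here are invented for illustration.

    ACTIONS = {"negotiate": 0.6, "flee": 0.2, "attack": 0.1, "ignore": 0.1}

    EMOTION_BIAS = {
        "neutral": {},                                 # no reweighting
        "anger":   {"attack": 4.0, "negotiate": 0.5},  # favour punitive options
        "fear":    {"flee": 4.0, "negotiate": 0.5, "attack": 0.3},
    }

    def choose(mood: str) -> str:
        """Pick the highest-scoring action after the mood's reweighting."""
        bias = EMOTION_BIAS.get(mood, {})
        scored = {a: s * bias.get(a, 1.0) for a, s in ACTIONS.items()}
        return max(scored, key=scored.get)

    for mood in ("neutral", "anger", "fear"):
        print(mood, "->", choose(mood))

Run as-is, this prints a different preferred action for each mood even though the baseline scores never change; that fast, cheap, global shift is the whole content of the "heuristic change" idea.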
From ablainey at aol.com Thu Jan 21 04:59:11 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 20 Jan 2010 23:59:11 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: References: <4650F17F2E264B828808D74CB62D3AEB@spike> Message-ID: <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> Hi Everyone. Just thought i'd stop lurking to give you a link to a video I have just put on Youtube. I have just got my hands on an EPOC headset which, for those that don't know, reads various thought patterns that can be interpreted and used to trigger keyboard events etc. I have cobbled together a 5 axis robot arm, switchboxes and a quick bit of software to read the keyboard events which are output from the Epoc, and the result is a brain-controlled robot arm. I don't need to state the implications. http://www.youtube.com/watch?v=4Cq35VbRpTY Leave a comment, or let me know if you want to replicate the set-up for yourself. All the best Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Thu Jan 21 05:02:04 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 20 Jan 2010 23:02:04 -0600 Subject: [ExI] EPOC EEG headset In-Reply-To: <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> Message-ID: <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> 2010/1/20 : > Hi Everyone. > Just thought i'd stop lurking to give you a link to a video I have just put > on Youtube. I have just got my hands on an EPOC headset which, for those > that don't know, reads various thought patterns that can be interpreted > and used to trigger keyboard events etc. I have cobbled together a 5 axis > robot arm, switchboxes and a quick bit of software to read the keyboard > events which are output from the Epoc, and the result is a brain-controlled > robot arm. > I don't need to state the implications. > > http://www.youtube.com/watch?v=4Cq35VbRpTY > > Leave a comment, or let me know if you want to replicate the set-up for > yourself. Thanks for that, Alex. Can you please take a typing test and report what sort of wpm you're able to achieve? - Bryan http://heybryan.org/ 1 512 203 0507 From spike66 at att.net Thu Jan 21 05:34:18 2010 From: spike66 at att.net (spike) Date: Wed, 20 Jan 2010 21:34:18 -0800 Subject: [ExI] EPOC EEG headset In-Reply-To: <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> Message-ID: <816ABDBA8A5A4F3AB07B7525C63D4A95@spike> >...On Behalf Of ablainey at aol.com >...It reads various thought patterns which can be interpreted and used to trigger keyboard events etc... This reminds me of STNG's Counselor Deanna Troi's ability to read minds to some extent with that whole betazoid thing she had going. Of course being drop dead gorgeous, most of what she read must have been something along the lines of: hammina hammina hammina cool me off cleavage pant pant melt... >...I have cobbled together a 5 axis robot arm, switchboxes and a quick bit of software to read the keyboard events which are output from the Epoc and the result is a brain-controlled robot arm. I don't need to state the implications. http://www.youtube.com/watch?v=4Cq35VbRpTY ... Alex This is wicked cool Alex! Congratulations! spike
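For anyone who wants to replicate Alex's set-up, the glue layer he describes comes down to catching the keypresses the headset software emits and mapping them onto commands for the arm controller. Below is a minimal sketch in Python, assuming the Emotiv side has been configured to emit ordinary keystrokes for each trained thought and that the arm's switchbox accepts one-byte commands over a serial line; the port name, baud rate, key choices and command bytes are all placeholders, not details from Alex's actual build.

    # Requires: pip install keyboard pyserial
    # (the keyboard module may need admin/root privileges to hook key events)
    import keyboard  # hooks OS-level key events -- here, the headset's output
    import serial    # talks to the arm controller over a serial line

    arm = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # placeholder port/baud

    # trained thought -> keystroke (per the headset's mapping) -> arm command byte
    KEYMAP = {
        "w": b"U",  # "push" thought: shoulder axis up
        "s": b"D",  # "pull" thought: shoulder axis down
        "a": b"L",  # "rotate left" thought: base left
        "d": b"R",  # "rotate right" thought: base right
        "g": b"G",  # "grip" thought: close gripper
    }

    for key, cmd in KEYMAP.items():
        # the default argument freezes cmd for each key binding
        keyboard.on_press_key(key, lambda e, c=cmd: arm.write(c))

    keyboard.wait("esc")  # run until ESC is pressed

The same skeleton would answer Bryan's wpm question too: point the key hooks at the full alphabet, log a timestamp per keypress, and divide characters typed by elapsed minutes.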
From ablainey at aol.com Thu Jan 21 06:40:09 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 21 Jan 2010 01:40:09 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: <816ABDBA8A5A4F3AB07B7525C63D4A95@spike> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <816ABDBA8A5A4F3AB07B7525C63D4A95@spike> Message-ID: <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com> Thanks Spike. My brain is busting with ideas for the Epoc (my RoboMows will be next!! LOL). The possibilities for the technology are pretty much endless. Now all we need is feedback direct to the brain. Alex -----Original Message----- From: spike >...On Behalf Of ablainey at aol.com >...It reads various thought patterns which can be interpreted and used to trigger keyboard events etc... This reminds me of STNG's Counselor Deanna Troi's ability to read minds to some extent with that whole betazoid thing she had going. Of course being drop dead gorgeous, most of what she read must have been something along the lines of: hammina hammina hammina cool me off cleavage pant pant melt... >...I have cobbled together a 5 axis robot arm, switchboxes and a quick bit of software to read the keyboard events which are output from the Epoc and the result is a brain-controlled robot arm. I don't need to state the implications. http://www.youtube.com/watch?v=4Cq35VbRpTY ... Alex This is wicked cool Alex! Congratulations! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Jan 21 10:51:07 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 21 Jan 2010 11:51:07 +0100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> Message-ID: <580930c21001210251v47b8326br1470361cf19f9a0f@mail.gmail.com> 2010/1/20 Stathis Papaioannou : > Qualia, consciousness, subjectivity, experience, intentionality, > understanding: What a ridiculous question! Of course I believe in > these things! So do you! So does everyone who is able to believe > anything! Really? :-) Of course such words have a meaning and usefully indicate something (why, with the likely exception of "qualia"). So do the words "blind watchmaker", "popular will", "avian sexual drive", "serendipity", "artistic value", "communism", "compound interest", "epistemology", "Murphy law", "independence", etc. An entirely different thing is the faith that they would not simply refer to our way of describing the world but to some essence-like, irreducible phenomena you can rap your knuckles on. -- Stefano Vaj From stathisp at gmail.com Thu Jan 21 11:33:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jan 2010 22:33:44 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <710b78fc1001202046n9d17f8ci8ad5c3a7a377b078@mail.gmail.com> References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> <710b78fc1001202046n9d17f8ci8ad5c3a7a377b078@mail.gmail.com> Message-ID: 2010/1/21 Emlyn : >> Do you really think it would be easier to make a device with >> subjectivity than without? I think it is more likely that the >> subjectivity is a necessary side-effect of the information processing >> underpinning it. > > I don't know how subjective experience works at all.
But notice that > we don't use feelings about things for any higher order cognitive > tasks; it's always used in "gut" (ie: instinctive) > reactions/decisions, broad sweeping heuristic changes to other parts > of the brains (eg: Angry! Give more weight to aggressive/punitive > measures!), simple fast decision making (hurts! pull away!). You may have a narrower understanding of "feelings" or "qualia" than I do. Every cognitive task involves a feeling, insofar as I am aware that I am doing it. If I am counting, I am aware of the feeling of counting. Computers can count, but I really doubt that they have anything like my feeling. When I count, I sort of have a grand view of all the numbers yet to come, an awareness that I am counting and the reason I am counting, a vision of each number as it would appear written down, an association with significant numbers from other aspects of my life, and so on. A computer does not do these things, because they are for the most part not necessary for the task. However, a conscious computer would reflect on its actions in this way, except perhaps when it devoted a subroutine to a mundane task, equivalent to a human subconsciously digesting his food. > When I think closely about what my subjective experience is, sans all > the information processing, I find there's just not much left for it > to do, except that it feels things (including "qualia"), and > communicates the results of that to the rest of the brain. How does > that happen? Buggered if I know. Why not just "simulate" feeling, in > an information processing kind of way? You've got me there, that's > exactly what I would do if I had to replicate it. How do you know that simulating feelings won't actually produce feelings? It could be that the feeling is *nothing more* than the system observing itself observe, something along the lines of what Jef has been saying. If this is so, then it would be impossible to make a zombie. People are still tempted to say that it is *conceivable* to make a zombie, but maybe conceivable is too loose a word. It is "conceivable" that 13 is not prime, but that doesn't mean much, since 13 is definitely prime in all possible worlds. Perhaps if we understood cognition well enough its necessary association with consciousness would be as clear as the necessary association between 13 and primeness. -- Stathis Papaioannou From emlynoregan at gmail.com Thu Jan 21 12:03:20 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 21 Jan 2010 22:33:20 +1030 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> <710b78fc1001202046n9d17f8ci8ad5c3a7a377b078@mail.gmail.com> Message-ID: <710b78fc1001210403l2ad85e8fg7e6ef74d63dce7be@mail.gmail.com> 2010/1/21 Stathis Papaioannou : > 2010/1/21 Emlyn : > >>> Do you really think it would be easier to make a device with >>> subjectivity than without? I think it is more likely that the >>> subjectivity is a necessary side-effect of the information processing >>> underpinning it. >> >> I don't know how subjective experience works at all. But notice that >> we don't use feelings about things for any higher order cognitive >> tasks; it's always used in "gut" (ie: instinctive) >> reactions/decisions, broad sweeping heuristic changes to other parts >> of the brains (eg: Angry! Give more weight to aggressive/punitive >> measures!), simple fast decision making (hurts!
pull away!). > > You may have a narrower understanding of "feelings" or "qualia" than I > do. Every cognitive task involves a feeling, insofar as I am aware > that I am doing it. "Insofar as I am aware that I am doing it". Exactly! What is it, to be aware of what you are doing? It is to feel. Yet, if you were to compare how you think, even through an abstract, multi-step problem, you'll find that it's nothing like a computer might do it. You have access to states in the thought process, but not the program which is creating that process. If you think about it, most of your thoughts are only loosely connected; to have strict derivation from the start to an end of a train of thought really requires us to carefully write down everything, interpolate missing pieces after the fact, tease out assumptions, and so on. What that tells me is that, while the aware/experiencing "you" is a definite part of the process (intricately involved in kicking off unrelated directions of inquiry, as you say below), it's by no means most of it, or even the most important part. I think, rather than claiming that "you" originate your thoughts, in fact you merely receive them from elsewhere (newer, cleverer parts of the cortex I guess), like they are written on a slate, and you read them off, mistaking them for your own creations. > If I am counting, I am aware of the feeling of > counting. Computers can count, but I really doubt that they have > anything like my feeling. Also, *how do you count*? I can tell you how a computer counts, down to the electrical signals in the hardware, but we cannot by inspection know very much about how we think when we do really abstract stuff. Well in fact I think counting is probably largely sequential recall for small numbers, and clunky execution of an algorithm for larger numbers, but that indeed is where the mechanism gets hazy, don't you think? > When I count, I sort of have a grand view of > all the numbers yet to come, an awareness that I am counting and the > reason I am counting, a vision of each number as it would appear > written down, an association with significant numbers from other > aspects of my life, and so on. A computer does not do these things, > because they are for the most part not necessary for the task. > However, a conscious computer would reflect on its actions in this > way, except perhaps when it devoted a subroutine to a mundane task, > equivalent to a human subconsciously digesting his food. Don't you find it suspicious that the pieces of your cognition that are least relevant to an abstract task (like arithmetic) are the ones you most readily feel and experience? I think we have many of these "unnecessary" experiences with abstract thought because the part of our brains which is most subjectively "us" is very poor at the kind of linear, algorithmic work that is involved, and in fact is probably not really doing that work, it's doing something else - free associating? > >> When I think closely about what my subjective experience is, sans all >> the information processing, I find there's just not much left for it >> to do, except that it feels things (including "qualia"), and >> communicates the results of that to the rest of the brain. How does >> that happen? Buggered if I know. Why not just "simulate" feeling, in >> an information processing kind of way? You've got me there, that's >> exactly what I would do if I had to replicate it. > > How do you know that simulating feelings won't actually produce > feelings?
I don't know that, I won't claim to. > It could be that the feeling is *nothing more* than the > system observing itself observe, something along the lines of what Jef > has been saying. People have been waving their hands in this direction for years; that our subjective awareness is the result of us being conscious of being conscious in a tightening loop that magically produces us. Maybe. I don't think that's right though. I say this because I can imagine subjective awareness without the machinery needed to be "self aware" in the sense that humans are; or at least without being aware of being aware. That's why, for instance, I think many animals (not just dolphins/monkeys/octopi/et al) are conscious. > If this is so, then it would be impossible to make a > zombie. People are still tempted to say that it is *conceivable* to > make a zombie, but maybe conceivable is too loose a word. It is > "conceivable" that 13 is not prime, but that doesn't mean much, since > 13 is definitely prime in all possible worlds. Perhaps if we > understood cognition well enough its necessary association with > consciousness would be as clear as the necessary association between > 13 and primeness. Well, I can't see why we couldn't make an AI which was a "zombie". And, that AI could easily be "self aware" in a sense (ie: have a model of itself in the world, have a model of itself as being an entity with a model of itself in the world, etc), without having a sense of subjective experience. It'd be easy to determine too; it just wouldn't understand what you meant by subjective experience. -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From gts_2000 at yahoo.com Thu Jan 21 12:16:41 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 21 Jan 2010 04:16:41 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <335652.69770.qm@web36502.mail.mud.yahoo.com> --- On Wed, 1/20/10, Stathis Papaioannou wrote: > The brain is not organised along the lines of a digital > computer, but it is organised along the lines of an information > processing system The brain does much more than merely process information. Your pocket calculator processes information but I think you'll agree it does not have conscious experience. > you have not presented *any* theory as to what consciousness may > be due to.. I do not pretend to have all the answers to the mysteries of the brain and I look askance at anyone in 2010 who does. I do however have some opinions about how not to explain the brain. > ... if not as a side-effect of information processing. I reject that idea as written because consciousness does not seem to me merely a "side-effect" of information processing. Clearly our conscious awareness of information processing plays a very important role in that process. And this is where the computationalist theory of mind falls on its face: it can try to explain information processing, but it cannot explain our conscious awareness of information processing, i.e., it cannot explain semantics. > If the NCC is a particular sequence of chemical reactions, why > should that be an "explanation"? If and when we come to know everything about the neural correlates of consciousness then we will know everything we can know about how the brain becomes conscious and has conscious experiences. If that does not seem satisfactory to you then you might look at the reasons why.
Perhaps you hold consciously or unconsciously to the doctrine of mind/matter duality, such that you think mental phenomena must in some way exist separate from the matter of the brain. I don't hold to that view. I think mental phenomena (conscious thoughts, beliefs, desires, and so on) exist as high level processes of the physical brain. In my view every conscious thought or emotion has a physical correlate in the brain. In 2010 we can change our perceptions of pain with pain-killers that alter the physics of our nervous systems. I see no reason we should not expect one day to alter our beliefs and desires the same way. -gts From gts_2000 at yahoo.com Thu Jan 21 12:18:55 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 21 Jan 2010 04:18:55 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <580930c21001201000j211c61der8963a2e063d4b6b6@mail.gmail.com> Message-ID: <237573.65849.qm@web36502.mail.mud.yahoo.com> --- On Wed, 1/20/10, Stefano Vaj wrote: >> The computationalist theory of mind, in which the >> brain is seen as a digital computer running software, does >> not explain how people can understand their own words. > > The computationalist theory of PCs, in which digital > computers are seen as, well, digital computers, does not > explain how PCs can "understand" their own instructions either. Right, and strictly speaking they don't. -gts From stathisp at gmail.com Thu Jan 21 12:53:00 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jan 2010 23:53:00 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <710b78fc1001210403l2ad85e8fg7e6ef74d63dce7be@mail.gmail.com> References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> <710b78fc1001202046n9d17f8ci8ad5c3a7a377b078@mail.gmail.com> <710b78fc1001210403l2ad85e8fg7e6ef74d63dce7be@mail.gmail.com> Message-ID: 2010/1/21 Emlyn : > Yet, if you were to compare how you think, even through an abstract, > multi-step problem, you'll find that it's nothing like a computer > might do it. You have access to states in the thought process, but not > the program which is creating that process. If you think about it, > most of your thoughts are only loosely connected; to have strict > derivation from the start to an end of a train of thought really > requires us to carefully write down everything, interpolate missing > pieces after the fact, tease out assumptions, and so on. Presumably this is just because of the haphazard way evolution put the brain together. > What that tells me is that, while the aware/experiencing "you" is a > definite part of the process (intricately involved in kicking off > unrelated directions of inquiry, as you say below), it's by no means > most of it, or even the most important part. I think, rather than > claiming that "you" originate your thoughts, in fact you merely > receive them from elsewhere (newer, cleverer parts of the cortex I > guess), like they are written on a slate, and you read them off, > mistaking them for your own creations. It seems that the newer, cleverer parts of the brain are the ones whose work we are most aware of. The oldest parts of the nervous system phylogenetically are probably things like the ganglions regulating gut motility, and we are not aware of the thinking, such as it is, that those do. 
The parts of the cortex perhaps unique to humans involve language and abstract thought, modelling ourselves as entities in the world, and these things require a lot of self-awareness. > Don't you find it suspicious that the pieces of your cognition that > are least relevant to an abstract task (like arithmetic) are the ones > you most readily feel and experience? I think we have many of these > "unnecessary" experiences with abstract thought because the part of > our brains which is most subjectively "us" is very poor at the kind of > linear, algorithmic work that is involved, and in fact is probably not > really doing that work, it's doing something else - free associating? The brain does a lot of very complex calculations in, for example, visual processing, which we are completely unaware of. We are only aware of the final result. But perhaps it is correct to say that the final result *is* the awareness of the processing in aggregate, seeing the whole picture and recognising it rather than looking at each individual pixel. > Well, I can't see why we couldn't make an AI which was a "zombie". > And, that AI could easily be "self aware" in a sense (ie: have a model > of itself in the world, have a model of itself as being an entity with > a model of itself in the world, etc), without having a sense of > subjective experience. It'd be easy to determine too; it just wouldn't > understand what you meant by subjective experience. What we call AIs today may be zombies, but I am not sure that an AI that could have prolonged contact with humans and fool them into thinking it was one of them could be a zombie. Daniel Dennett has argued that it would have to have zombie beliefs etc. which would be indistinguishable from real beliefs. And David Chalmers' Fading Qualia argument, which I have described at length, implies that if zombies are possible then we might all be zombies and not realise it, suggesting that there is no logical distinction between a conscious being and its zombie equivalent. -- Stathis Papaioannou From stathisp at gmail.com Thu Jan 21 13:11:32 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jan 2010 00:11:32 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <335652.69770.qm@web36502.mail.mud.yahoo.com> References: <335652.69770.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/21 Gordon Swobe : >> If the NCC is a particular sequence of chemical reactions, why >> should that be an "explanation"? > > If and when we come to know everything about the neural correlates of consciousness then we will know everything we can know about how the brain becomes conscious and has conscious experiences. If that does not seem satisfactory to you then you might look at the reasons why. Perhaps you hold consciously or unconsciously to the doctrine of mind/matter duality, such that you think mental phenomena must in some way exist separate from the matter of the brain. I don't hold to that view. I think mental phenomena (conscious thoughts, beliefs, desires, and so on) exist as high level processes of the physical brain. > > In my view every conscious thought or emotion has a physical correlate in the brain. In 2010 we can change our perceptions of pain with pain-killers that alter the physics of our nervous systems. I see no reason we should not expect one day to alter our beliefs and desires the same way. But finding a physical correlate does not provide an "explanation".
I can stubbornly point out that there is no logical pathway from a lump of matter to meaning, even if there is an apparent correlation. This is at least as convincing as your assertion that syntax can't produce meaning. And it's not obvious that matter is capable of syntax either, unless it is organised as an information-processing machine. All you can do then is point to the brain and say, but there is the proof, it thinks, you just have to accept it as a raw fact. So why can't someone point to a computer and say the same thing, even supposing that they are no more capable of explaining how the computer produces meaning than you are capable with your advanced future understanding of neuroscience of explaining how matter produces meaning? -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 21 13:25:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 21 Jan 2010 05:25:36 -0800 (PST) Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: Message-ID: <610150.14412.qm@web36508.mail.mud.yahoo.com> --- On Thu, 1/21/10, Stathis Papaioannou wrote: > And David Chalmers' Fading Qualia argument, which I have described at > length, implies that if zombies are possible then we might all be > zombies and not realise it, Chalmers just likes to play with words. As I mentioned once in the past, his hard problem book leaves us with more questions than we started with. If philosophical zombies can exist then by definition they can exist only with no mental contents, i.e., with no conscious intentionality -- with no conscious beliefs or realizations about anything whatsoever. If you have conscious awareness of such things going on in your head then you know you're not a zombie. -gts From gts_2000 at yahoo.com Thu Jan 21 13:01:42 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 21 Jan 2010 05:01:42 -0800 (PST) Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: Message-ID: <758269.36155.qm@web36505.mail.mud.yahoo.com> --- On Thu, 1/21/10, Stathis Papaioannou wrote: > Every cognitive task involves a feeling, insofar as I > am aware that I am doing it. If I am counting, I am aware of the > feeling of counting. Computers can count, but I really doubt that they > have anything like my feeling. When I count, I sort of have a > grand view of all the numbers yet to come, an awareness that I am > counting and the reason I am counting, a vision of each number as it > would appear written down, an association with significant numbers from > other aspects of my life, and so on. A computer does not do these > things, because they are for the most part not necessary for the > task. However, a conscious computer would reflect on its actions > in this way.. Excellent. We have essentially the same ideas about the issues even if we don't agree. Some people seem to deny the existence of consciousness and thus their own experiences of life in what look to me like vain attempts to escape the conclusion that humans might have something computers do not have. I don't have much to say to them. I start with the observation that I have conscious experiences of having a life. I figured that much out back on the day the doctor spanked my butt in the maternity ward.
-gts From gts_2000 at yahoo.com Thu Jan 21 13:43:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 21 Jan 2010 05:43:59 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <6646.24698.qm@web36508.mail.mud.yahoo.com> --- On Thu, 1/21/10, Stathis Papaioannou wrote: > But finding a physical correlate does not provide an > "explanation". I can stubbornly point out that there is no logical > pathway from a lump of matter to meaning, even if there is an apparent > correlation. To say "there is no logical pathway from a lump of matter to meaning" is equivalent to saying that mind and matter exist in separate realms. It seems then that you really do want to espouse the mind/matter dualism handed down to us from Descartes. > This is at least as convincing as your assertion that syntax > can't produce meaning. That's just a strictly logical argument. You don't like it, but it remains nevertheless true that the man in the chinese room cannot understand the meanings of the symbols merely from manipulating them according to syntactic rules the way computers actually do. At least you (and nobody else) have not shown how that miracle can happen. > All you can do then is point to the brain and say, but there is the > proof, it thinks, you just have to accept it as a raw fact. Yes. That's all I can do. > So why can't someone point to a computer and say the same thing People can point all they want, but they need to explain how a program and its hardware can get semantics from the syntactic rules programmed into the machine by the programmer. We don't know exactly how the natural brain does it, but it sure looks like it cannot do it *that* way. -gts From stefano.vaj at gmail.com Thu Jan 21 14:43:46 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 21 Jan 2010 15:43:46 +0100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> <710b78fc1001202046n9d17f8ci8ad5c3a7a377b078@mail.gmail.com> Message-ID: <580930c21001210643x63117315s9f0063d6cc5a1d2b@mail.gmail.com> 2010/1/21 Stathis Papaioannou : > Computers can count, but I really doubt that they have > anything like my feeling. Yes, but this is also true for fruitflies (organic brains, no doubt), bats (mammals!), chimps, men informed by a different worldview and cultural background, and ultimately even members of your family, whose "feelings" you would never be able to experience first-hand. All that is therefore materially irrelevant for the discussion "organic brains deal with information in ways that are qualitatively different from other systems". -- Stefano Vaj From jonkc at bellsouth.net Thu Jan 21 18:35:42 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 21 Jan 2010 13:35:42 -0500 Subject: [ExI] digital simulations, descriptions and copies. In-Reply-To: <335652.69770.qm@web36502.mail.mud.yahoo.com> References: <335652.69770.qm@web36502.mail.mud.yahoo.com> Message-ID: <17940E88-6828-477B-9C82-988FAEAAA195@bellsouth.net> By 8:30 am on Jan 21, 2010 Gordon Swobe had written 4 posts; I shall write one. > Some people seem to deny the existence of consciousness Nobody on this side of a looney bin does. > The brain does much more than merely process information.
It also does something that the Scientific Method cannot detect, the name of that "something" has slipped my mind but I think it starts with the letter "S". > Your pocket calculator processes information but I think you'll agree it does not have conscious experience. Why would I agree with that, in fact why would you agree that a rock does not have conscious experience? True, rocks don't act like they are aware, but according to you behavior tells us nothing about inner awareness. > I reject that idea as written because consciousness does not seem to me merely a "side-effect" of information processing. Yes I know you reject that because it just doesn't seem right to you, you can't explain why it doesn't, it just doesn't. You're not alone, lots of people reject things even though there is a mountain of data in support of it and accept things when there is no data at all to support it. I however prefer Science. > I think mental phenomena (conscious thoughts, beliefs, desires, and so on) exist as high level processes of the physical brain. But you can give no explanation of how Evolution produced these things. Well OK that's not entirely true, you did give a stab at explaining it, something like: on Monday Wednesday and Friday consciousness affects behavior enough for Evolution to produce it, but on Tuesday Thursday and Saturday consciousness affects behavior too little for the Turing Test to work, and on Sunday you're just a bit confused. > If philosophical zombies can exist [...] Then Darwin's Theory of Evolution is wrong. > this is where the computationalist theory of mind falls on its face: it can try to explain information processing, but it cannot explain our conscious awareness of information processing, i.e., it cannot explain semantics. But you can't even explain what the terms mean. Earlier you said that even humans can't get semantics from syntax. Do you care to explain that little gem? Probably not; in the past when anybody pointed out a contradiction in your ideas you just ignored the difficulty. Doublethink in action. > Perhaps you hold consciously or unconsciously to the doctrine of mind/matter duality, such that you think mental phenomena must in some way exist separate from the matter of the brain. They are separate only in the way that verbs and adjectives are separate from nouns. Not very mystical. > > In my view every conscious thought or emotion has a physical correlate in the brain. In the brain? Why not in the neuron? You have said that signals between neurons are not involved in consciousness, so they can't cooperate to generate it, so one neuron will have to do even though you can't explain why there aren't 100 billion conscious beings inside your head. John K Clark From kanzure at gmail.com Thu Jan 21 19:35:00 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Thu, 21 Jan 2010 13:35:00 -0600 Subject: [ExI] Fwd: Bay Area RepRap Meetup Tuesday Jan 26 @ 7:00 PM @ TechShop Message-ID: <55ad6af71001211135q7fa553b4j79e911fc7bf076ec@mail.gmail.com> ---------- Forwarded message ---------- From: J.R. Warmkessel Date: Thu, Jan 21, 2010 at 12:53 PM Subject: RepRap Meetup Tuesday Jan 26 @ 7:00 PM @ Tech shop To: Bay Area RepRap Hi all, I am pleased to announce the next Bay Area RepRap meeting. It is scheduled for Tuesday Jan 26 @ 7:00 PM @ Tech Shop. You do not have to be a member of the Tech Shop to attend our meetup. I look forward to seeing you all there J.R.
Warmkessel -- You received this message because you are subscribed to the Google Groups "Bay Area RepRap" group. To post to this group, send email to bay-area-reprap at googlegroups.com. To unsubscribe from this group, send email to bay-area-reprap+unsubscribe at googlegroups.com. For more options, visit this group at http://groups.google.com/group/bay-area-reprap?hl=en. -- - Bryan http://heybryan.org/ 1 512 203 0507 From stathisp at gmail.com Thu Jan 21 22:41:37 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jan 2010 09:41:37 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <6646.24698.qm@web36508.mail.mud.yahoo.com> References: <6646.24698.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/22 Gordon Swobe : > --- On Thu, 1/21/10, Stathis Papaioannou wrote: > >> But finding a physical correlate does not provide an >> "explanation". I can stubbornly point out that there is no logical >> pathway from a lump of matter to meaning, even if there is an apparent >> correlation. > > To say "there is no logical pathway from a lump of matter to meaning" is equivalent to saying that mind and matter exist in separate realms. It seems then that you really do want to espouse the mind/matter dualism handed down to us from Descartes. I'm saying this to show where your assertion that syntax can't produce meaning leads. >> This is at least as convincing as your assertion that syntax >> can't produce meaning. > > That's just a strictly logical argument. You don't like it, but it remains nevertheless true that the man in the chinese room cannot understand the meanings of the symbols merely from manipulating them according to syntactic rules the way computers actually do. At least you (and nobody else) have not shown how that miracle can happen. It also remains strictly true that a lump of matter cannot produce meaning. Put a whole *mountain* of matter in a room and talk to it in Chinese for a million years. Will it understand Chinese? No it won't! So how can organising the matter in a special way, whether in a brain or in a computer, produce meaning when the meaning just isn't there to begin with? >> All you can do then is point to the brain and say, but there is the >> proof, it thinks, you just have to accept it as a raw fact. > > Yes. That's all I can do. > >> So why can't someone point to a computer and say the same thing > > People can point all they want, but they need to explain how a program and its hardware can get semantics from the syntactic rules programmed into the machine by the programmer. We don't know exactly how the natural brain does it, but it sure looks like it cannot do it *that* way. As I and others have said numerous times, it's quite obvious that meaning could *only* come from the association of one symbol or input with another symbol or input. But in case you still don't accept that, and if you are not bothered by saying that dumb matter acquires understanding even though on the face of it that seems impossible, you can still say that computers have understanding by virtue of the matter that they contain rather than by virtue of the programs they run.
-- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 21 23:12:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 21 Jan 2010 15:12:02 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <785532.27412.qm@web36508.mail.mud.yahoo.com> --- On Thu, 1/21/10, Stathis Papaioannou wrote: > As I and others have said numerous times, it's quite > obvious that meaning could *only* come from the association of one > symbol or input with another symbol or input. If we want to know the meaning of a word we can look it up in the dictionary and see "one symbol associated with other symbols". That trivial fact does not interest me, nor should it interest you, unless of course you can show me that the dictionary itself actually understands the words it defines. In that case the dictionary has overcome the symbol grounding problem. -gts From stathisp at gmail.com Thu Jan 21 23:33:09 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jan 2010 10:33:09 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <610150.14412.qm@web36508.mail.mud.yahoo.com> References: <610150.14412.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/22 Gordon Swobe : > If philosophical zombies can exist then by definition they can exist only with no mental contents, i.e., with no conscious intentionality -- with no conscious beliefs or realizations about anything whatsoever. If you have conscious awareness of such things going on in your head then you know you're not a zombie. Yes, but if it were possible to make brain components that function like the brain in every way except lacking consciousness, then it would be possible to arbitrarily remove any aspect of a person's consciousness and they would not realise that anything had changed. That would mean you could be at least partially a zombie right now and not realise it; for example, you might be blind or aphasic, even though you are convinced you are not. If you think this is absurd, as it seems you do, then you have to find a way to avoid the absurdity. The obvious way to avoid it is to say that it is impossible to separate consciousness from brain function. The only other way I can think of to avoid it is to deny that consciousness is caused by the brain, but that isn't acceptable to you. Do you have another way to avoid it that you haven't yet revealed? -- Stathis Papaioannou From stathisp at gmail.com Thu Jan 21 23:37:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jan 2010 10:37:31 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <758269.36155.qm@web36505.mail.mud.yahoo.com> References: <758269.36155.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/22 Gordon Swobe : > Some people seem to deny the existence of consciousness and thus their own experiences of life in what look to me like vain attempts to escape the conclusion that humans might have something computers do not have. I don't have much to say to them. I agree that these people can't really deny the existence of consciousness. They must mean something other than what it looks like, or they are being provocative. -- Stathis Papaioannou From stathisp at gmail.com Thu Jan 21 23:54:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jan 2010 10:54:19 +1100 Subject: [ExI] Coherent vs.
Incoherent Fears of Being Uploaded In-Reply-To: <580930c21001210643x63117315s9f0063d6cc5a1d2b@mail.gmail.com> References: <4B53990B.8010202@rawbw.com> <4B568B9A.7010400@rawbw.com> <4B57B1AC.2060906@rawbw.com> <710b78fc1001201928u789465e0kf8b753c3cc0cf7bc@mail.gmail.com> <710b78fc1001202046n9d17f8ci8ad5c3a7a377b078@mail.gmail.com> <580930c21001210643x63117315s9f0063d6cc5a1d2b@mail.gmail.com> Message-ID: 2010/1/22 Stefano Vaj : > 2010/1/21 Stathis Papaioannou : >> Computers can count, but I really doubt that they have >> anything like my feeling. > > Yes, but this is also true for fruitflies (organic brains, no doubt), > bats (mammals!), chimps, men informed by a different worldview and > cultural background, and ultimately even members of your family, whose > "feelings" you would never be able to experience first-hand. > > All that is therefore materially irrelevant for the discussion > "organic brains deal with information in ways that are qualitatively > different from other systems". Yes, I went on to say that a human-level AI would have these feelings. -- Stathis Papaioannou From ablainey at aol.com Fri Jan 22 01:01:54 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 21 Jan 2010 20:01:54 -0500 Subject: [ExI] thought controlled Third arm, (was EPOC EEG headset) In-Reply-To: <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><816ABDBA8A5A4F3AB07B7525C63D4A95@spike> <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com> Message-ID: <8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> Hi All, Just as an offshoot subject from the thought-controlled robot arm video, I just wanted to point out my thoughts regarding it that may be food for thought. In my experiment I controlled a 5-axis robot arm using an Epoc EEG headset. Although the amount of time I spent learning to operate it and actually experimenting with it only runs to maybe 10 hours, it did in some way become an associated part of my body. By that I mean I was getting to a stage where operation sometimes required little more effort than moving my real arms. And although there was no actual neural feedback, just visual and auditory, I did develop a pseudo-feeling while operating it. I seemed to actually feel regional sensations in the brain triggered by each specific cognitive action required for each movement of the robot. In a sense I feel I was on the cusp of mentally fully accepting it as a third limb. Very odd and almost too vague for description. I am convinced that if I were permanently connected to it, it would become a fully accepted body part in very little time. One that I would be unwilling to lose. The odd thing now is when I look at the robot arm, sitting there lifeless on the desk, I now already have a slight feeling of loss! While similar to losing an actual body part it is different as I know the part can be reconnected. A loss of ability more than the physical part. The whole affair is very strange as it is first-hand (LOL) experience relating to many theoretical subjects discussed here. To me the question of copy vs original seems slightly altered by this experience. Something I will think about in depth when I can find a decade or two. Just thought I would share as it has further highlighted to me the difference between pure logical theory argument and the actual reality. Also it further shows just how malleable the concept of 'physical self' really is.
Alex From femmechakra at yahoo.ca Fri Jan 22 04:30:18 2010 From: femmechakra at yahoo.ca (Anna Taylor) Date: Thu, 21 Jan 2010 20:30:18 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies Message-ID: <719643.84470.qm@web110416.mail.gq1.yahoo.com> > Gordon wrote: > If philosophical zombies can exist [...] >>John wrote: >>Then Darwin's Theory of Evolution is wrong. What if his theory of evolution was based on "his" own perspective? It's called "Darwin's":) What makes it 100% viable? Last I heard it was still a theory:) > In my view every conscious thought or emotion has a physical correlate > in the brain. >> In the brain? Why not in the neuron? You have said that signals >> between neurons are not involved in consciousness, so they can't >> cooperate to generate it, so one neuron will have to do even >> though you can't explain why there aren't 100 billion conscious >> beings inside your head. Does it matter if it's in the neuron or the brain? Are you both biologists? I think at times people become zombies. They live in their own world with their own ideals, thoughts and ideas. They have no desire to relate, idealise or think. I would usually say that women and men that have recently had families are very much like zombies. Their conscious state revolves and depends on others. Nothing wrong with that. I strongly believe that "sole" consciousness is very different from "collective". Not that one is more important than the other, just different. In my "sole" conscious mind my 100 billion thoughts (in a "sole" there are anywhere from 1 to 100 billion conscious thoughts or beings inside your head) are my own but I can imagine the family that has chosen to consciously evolve and produce that "hopeful" one neuron to spread. Not everything is black or white. Darwin is a theory, Psi is not mathematically possible and John will never agree unless he sees it with his own eyes;) Just a thought:) From spike66 at att.net Fri Jan 22 06:28:24 2010 From: spike66 at att.net (spike) Date: Thu, 21 Jan 2010 22:28:24 -0800 Subject: [ExI] heaves a long broken psi In-Reply-To: <719643.84470.qm@web110416.mail.gq1.yahoo.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> Message-ID: <9711229417674894B95766237C126A85@spike> > ...On Behalf Of Anna Taylor > ...Psi is not mathematically possible and John will > never agree unless he sees it with his own eyes;) Anna Hi Anna, Damien has made a good point in the past few weeks. With many of us, psi is something with which we will never agree *even if we do see it with our own eyes*, as evidenced by my own disinterest in really studying the available data. I confess it freely: if I have no theoretical basis for an explanation, even a far-fetched one, I would necessarily assume I was misunderstanding something fundamental about the test or the data. The outcome of the Nolan sisters experiment is a good example; I don't know why the result came out that way. I like their music anyway. Damien has a book in which he suggests some possible theoretical basis for otherwise unexplainable phenomena, but I haven't made the effort to read it.
Regarding seeing it with their own eyes, should Marley's ghost appear to me in the night, I would not be the least frightened, for I could only assume either wildly sophisticated trickery for the purpose of humor (which I myself have been known to engage in) or hallucination, for I know as firmly as I know anything that ghosts are not real. I would know this just as firmly even if Marley's ghost were to appear making comments such as "Eh mon! Lighten up! Eets Chreestmahhsss!" spike From stathisp at gmail.com Fri Jan 22 09:12:20 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jan 2010 20:12:20 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <785532.27412.qm@web36508.mail.mud.yahoo.com> References: <785532.27412.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/22 Gordon Swobe : > --- On Thu, 1/21/10, Stathis Papaioannou wrote: > >> As I and others have said numerous times, it's quite >> obvious that meaning could *only* come from the association of one >> symbol or input with another symbol or input. > > If we want to know the meaning of a word we can look it up in the dictionary and see "one symbol associated with other symbols". That trivial fact does not interest me, nor should it interest you, unless of course you can show me that the dictionary itself actually understands the words it defines. In that case the dictionary has overcome the symbol grounding problem. At some point, there must be an association between a symbol and one of the special symbols which are generated by sensory data. Then the symbol is "grounded". -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Jan 22 13:23:26 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 22 Jan 2010 05:23:26 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <844360.88902.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/22/10, Stathis Papaioannou wrote: > At some point, there must be an association between a > symbol and one of the special symbols which are generated by sensory > data. Then the symbol is "grounded". You misunderstand symbol grounding. It's not about association of symbols with other symbols, per se. It's about comprehension of those symbols. The good people at Merriam-Webster associate words with other words on paper and then publish those printed associations. Those words and their associated words are grounded only to the extent that some agent(s) comprehends the meanings of them. If every agent capable of comprehending word meanings died suddenly, they would leave behind dictionaries filled with ungrounded symbols. The words defined in those dictionaries would remain physically associated with the words in their definitions, but nobody would be around to know what any of the symbols meant. The words would remain associated but they would become ungrounded. http://en.wikipedia.org/wiki/Symbol_grounding -gts From gts_2000 at yahoo.com Fri Jan 22 14:41:56 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 22 Jan 2010 06:41:56 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <873824.16326.qm@web36504.mail.mud.yahoo.com> --- On Thu, 1/21/10, Stathis Papaioannou wrote: >> To say "there is no logical pathway from a lump of >> matter to meaning" is equivalent to saying that mind and >> matter exist in separate realms. It seems then that you >> really do want to espouse the mind/matter dualism handed >> down to us from Descartes.
> > I'm saying this to show where your assertion that syntax > can't produce meaning leads. My assertion leads simply to a philosophy of mind in which the brain attaches meanings to symbols in some way that we do not yet fully understand. Nothing more. In the next step of our journey we must decide between monism and not-monism (usually dualism). I choose monism. Looks to me like the world is comprised of just one kind of stuff. Some configurations of that one stuff have conscious understanding of symbols. Most if not all other configurations of that stuff do not. -gts From eschatoon at gmail.com Fri Jan 22 15:10:24 2010 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Fri, 22 Jan 2010 16:10:24 +0100 Subject: [ExI] EPOC EEG headset In-Reply-To: <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> Message-ID: <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> Wow impressive. I am also interested in Bryan's question: how fast can you type? The video is impressive for those who already know about EPOC; other viewers would not realize that you are controlling the robot arm by thought. Next time you may wish to include yourself with the device on, and your hands still. G. On Thu, Jan 21, 2010 at 6:02 AM, Bryan Bishop wrote: > 2010/1/20 : >> Hi Everyone. >> Just thought I'd stop lurking to give you a link to a video I have just put >> on Youtube. I have just got my hands on an EPOC headset which, for those >> that don't know, reads various thought patterns that can be interpreted >> and used to trigger keyboard events etc. I have cobbled together a 5-axis >> robot arm, switchboxes and a quick bit of software to read the keyboard >> events which are output from the Epoc, and the result is a brain-controlled >> robot arm. >> I don't need to state the implications. >> >> http://www.youtube.com/watch?v=4Cq35VbRpTY >> >> Leave a comment, or let me know if you want to replicate the setup for >> yourself. > > Thanks for that, Alex. Can you please take a typing test and report > what sort of wpm you're able to achieve? > > - Bryan > http://heybryan.org/ > 1 512 203 0507 -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From spike66 at att.net Fri Jan 22 17:30:45 2010 From: spike66 at att.net (spike) Date: Fri, 22 Jan 2010 09:30:45 -0800 Subject: [ExI] leukocyte chasing bacterium Message-ID: <77E61E506ABE446FB1D3A6F1A3A19E13@spike> Here's a cool video I wish the old timers could have been able to see, Pasteur and those cats. A white blood cell is chasing and seeking to devour a bacterium. One finds oneself almost cheering for the germ. It would be a weird one of course: Go little bacterium! That bad old leukocyte is right on your a... your... um... http://www.youtube.com/watch?v=JnlULOjUhSQ From jonkc at bellsouth.net Fri Jan 22 17:19:48 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 22 Jan 2010 12:19:48 -0500 Subject: [ExI] digital simulations, descriptions and copies.
In-Reply-To: <6646.24698.qm@web36508.mail.mud.yahoo.com> References: <6646.24698.qm@web36508.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe wrote 4; in none of those posts did he address any of the objections I raised. No doubt he will be unable to answer these newer objections either. > To say "there is no logical pathway from a lump of matter to meaning" is equivalent to saying that mind and matter exist in separate realms. Well they do exist in separate realms, nouns, adjectives and verbs do too. So what? > > it remains nevertheless true that the man in the chinese room cannot understand the meanings of the symbols And it remains nevertheless true that the understanding or lack thereof of that silly little man in the chinese room is irrelevant in determining if understanding is involved in the situation. > they need to explain how a program and its hardware can get semantics from the syntactic rules programmed into the machine by the programmer. Why do they "need" to explain it? At its deepest level nobody can explain how gravity works but that doesn't stop logical and intelligent people from believing that gravity exists because the evidence is overwhelming, and in a similar way the evidence is overwhelming that syntax can produce semantics, and if a hairless ape on the third rock from the sun can't figure out how that works it doesn't make it any less true. And I would humbly suggest that you take a temporary hiatus in the use of the words "semantics" and "syntax" until you have some idea what the words mean so as to avoid embarrassment such as your recent blunder when you said that even humans can't get semantics from syntax. > If we want to know the meaning of a word we can look it up in the dictionary and see "one symbol associated with other symbols" And somebody can point to one of those symbols in the dictionary and then point to something in the real world and then we get the semantics that the symbol represents a real object. > My assertion leads simply to a philosophy of mind in which the brain attaches meanings to symbols in some way that we do not yet fully understand. A 1950's punch card machine attaches a meaning to a hole in a card, a meaning that says put this card in that column. What don't you understand about that? > Looks to me like the world is comprised of just one kind of stuff. It sure as hell doesn't look that way to me! http://en.wikipedia.org/wiki/Symbol_grounding Thanks but I already know how to get to Wikipedia. John K Clark From natasha at natasha.cc Fri Jan 22 17:34:02 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 22 Jan 2010 11:34:02 -0600 Subject: [ExI] Book: What is Posthumanism? In-Reply-To: <57803627CD39584B9492656EB555A51E0111A2E53B@b65-exmb3> References: <57803627CD39584B9492656EB555A51E0111A2E53B@b65-exmb3> Message-ID: I just came across Cary Wolfe's writings, and noticed this book, which has a publication date of January 2010. His current book is What is Posthumanism? http://search.barnesandnoble.com/What-Is-Posthumanism/Cary-Wolfe/e/9780816666157#TOC (He addresses issues also discussed within the framework of transhumanism.)* Wolfe's linking posthumanism to second-order cybernetics (cybernetics of cybernetics) caught my eye because I am currently writing about cybernetics of cybernetics as linked to the transhuman. Any thoughts/insights on this link would be beneficial.
Natasha *[Within the arts, posthumanism has been argued as being more thought-provoking, insightful, rigorous, and sophisticated than transhumanism. Of course, most of this stems from a strategic positioning of posthumanism within academia. (Hayles, Ihde, Peters, etc., found fault with transhumanism in the Metanexus Global Spiral magazine, for which I was invited to be Guest Editor to respond to their claims: http://www.metanexus.net/magazine/tabid/68/id/10693/Default.aspx, with a follow-up at the recent conference where More, de Grey and I were invited to speak about our responsive papers: http://www.metanexus.net/conference2009/featured_speakers.aspx)] Natasha Vita-More From eric at m056832107.syzygy.com Fri Jan 22 18:18:04 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 22 Jan 2010 18:18:04 -0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <64697.50664.qm@web36501.mail.mud.yahoo.com> References: <64697.50664.qm@web36501.mail.mud.yahoo.com> Message-ID: <20100122181804.5.qmail@syzygy.com> Gordon writes: >PS. Like most everything in the natural world including apples, the > human brain appears to be a non-digital object. This is only true in the very narrow sense in which a 74LS00 integrated circuit is a non-digital object. The digital nature of neurons is one of the things which makes them most useful as information processors. The key characteristic of digital circuits is that they quantize an analog signal into discrete values. That 74LS00 accepts a wide range of analog voltages on its input pins, but the output is insensitive to small changes in the input voltage. It divides the input signal space into low voltages which are considered zeros, and high voltages which are considered ones. The action potential of a neuron looks more like the output of a digital logic gate than like the output of an analog circuit. It's an all-or-nothing response. When the integrative portion of the neuron crosses a threshold voltage the output voltage of the neuron spikes in a pulse which is relatively uniform and insensitive to small changes in the input voltages. Successive gates in a digital circuit design restore the voltage level of signals to their discrete values, preventing losses from accumulating during multiple stages of processing. Successive neurons in a brain restore signal levels in much the same way, and enable multi-stage processing via the same mechanism. This is an important way in which the brain acts very much like a digital computer. It is not incidental to the functioning of the brain, it is fundamental. It's also one reason why analysis of the brain at the neural level makes sense, and the details of what happens inside each neuron can be abstracted. Another reason why it makes sense to draw an abstraction boundary around neurons is that there is already a natural boundary around neurons: the cell membrane. Most of the interesting behaviors of neurons happen because the cell membrane is only selectively permeable to various molecules and ions. That barrier reduces the complexity of the interactions which can occur across it, so it is a perfect place to put an abstraction layer.
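Eric's threshold point is easy to make concrete. The following toy leaky integrate-and-fire neuron is a standard textbook model, not anything from his post, and the parameter values are arbitrary; it shows analog input being quantized into an all-or-nothing spike train:

# Toy leaky integrate-and-fire neuron: the "membrane" integrates analog
# input, but the output is a uniform, all-or-nothing spike.
def simulate(inputs, threshold=1.0, leak=0.9):
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # analog integration with leak
        if v >= threshold:        # threshold crossing...
            spikes.append(1)      # ...produces a uniform spike
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# Small sub-threshold jitter in the input leaves the output unchanged:
print(simulate([0.3, 0.3, 0.3, 0.5, 0.1]))    # [0, 0, 0, 1, 0]
print(simulate([0.31, 0.29, 0.33, 0.48, 0.1]))  # [0, 0, 0, 1, 0]

The two input sequences differ slightly, yet the spike trains come out identical: this is the signal-restoring, noise-discarding behavior Eric describes.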
As far as digital simulations and copies are concerned, consider the following substitution: >For example you might create a digital simulation of a thermostat on your > digital computer. Your digital simulation of a thermostat will appear > very much like a real thermostat, but you will find it difficult to > use it to regulate temperature. The reason you cannot regulate > temperature with that thermostat should be pretty obvious: it's not > really a thermostat. It's merely a digital simulation of a non-digital > thermostat. Most thermostats installed today are digital simulations of analog thermostats. They manage to get the job done anyway. And yes, a digital simulation of a person should enjoy eating a digital simulation of an apple. If not, it's not a very good simulation. -eric From jonkc at bellsouth.net Fri Jan 22 17:51:55 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 22 Jan 2010 12:51:55 -0500 Subject: [ExI] digital simulations, descriptions and copies. In-Reply-To: <719643.84470.qm@web110416.mail.gq1.yahoo.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> Message-ID: <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> On Jan 21, 2010, Anna Taylor wrote: > > What if his theory of evolution was based on "his" own perspective. > It's called "Darwin's":) What makes it 100% viable? Last I heard it > was still a theory And what do you think the word "theory" means, a guess? A theory is a group of thoughts to explain something, such as Copernicus's theory that the Earth goes around the sun or Newton's theory of gravity or the theory of cause and effect. Some theories explain things better than others, and no theory explains things better than Darwin's. I do give you credit for realizing that Gordon's ideas are totally incompatible with Darwin's, but joining the creationists' camp seems like a very high price to pay to embrace his looney teachings. > Does it matter if it's [consciousness] in the neuron or the brain? Yes it matters. Gordon says signals between neurons are not involved in consciousness; that means there are 100 billion completely independent entities in your head with absolutely no way to interact with each other in any way. And that is idiocy of the highest order. > Darwin is a theory, Psi is not mathematically possible and John will never agree unless he sees it with his own eyes That is not true at all, I don't need to personally experience Psi; as I've said many many times, just show me a pro Psi article in Nature or Science, that's all I ask. John K Clark
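Returning to Eric's thermostat substitution above: stripped of packaging, a digital thermostat is little more than a sample-compare-switch loop with a deadband. The sketch below is a minimal illustration of that claim, with hypothetical read_temp() and set_heater() callables standing in for whatever sensor and relay a real unit has:

import time

SETPOINT = 20.0    # target temperature, degrees C
HYSTERESIS = 0.5   # deadband so the relay doesn't chatter

def run(read_temp, set_heater):
    # Sample, compare, switch -- the whole control law.
    heating = False
    while True:
        t = read_temp()
        if t < SETPOINT - HYSTERESIS:
            heating = True
        elif t > SETPOINT + HYSTERESIS:
            heating = False
        set_heater(heating)
        time.sleep(5)

That this loop regulates real temperature as soon as it is wired to a real sensor and relay is the substance of Eric's point: the "simulation" of the analog thermostat does the analog thermostat's job.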
From jonkc at bellsouth.net Fri Jan 22 17:51:55 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 22 Jan 2010 12:51:55 -0500
Subject: [ExI] digital simulations, descriptions and copies.
In-Reply-To: <719643.84470.qm@web110416.mail.gq1.yahoo.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com>
Message-ID: <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net>

On Jan 21, 2010, Anna Taylor wrote:
>
> What if his theory of evolution was based on "his" own perspective.
> It's called "Darwin's":) What makes it 100% viable? Last I heard it
> was still a theory

And what do you think the word "theory" means, a guess? A theory is a group of thoughts to explain something, such as Copernicus's theory that the Earth goes around the sun or Newton's theory of gravity or the theory of cause and effect. Some theories explain things better than others, and no theory explains things better than Darwin's. I do give you credit for realizing that Gordon's ideas are totally incompatible with Darwin's, but joining the creationists' camp seems like a very high price to pay to embrace his looney teachings.

> Does it matter if it's [consciousness] in the neuron or the brain?

Yes it matters. Gordon says signals between neurons are not involved in consciousness; that means there are 100 billion completely independent entities in your head with absolutely no way to interact with each other. And that is idiocy of the highest order.

> Darwin is a theory, Psi is not mathematically possible and John will never agree unless he sees it with his own eyes

That is not true at all. I don't need to personally experience Psi; as I've said many many times, just show me a pro Psi article in Nature or Science, that's all I ask.

John K Clark

From ablainey at aol.com Fri Jan 22 18:22:42 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Fri, 22 Jan 2010 13:22:42 -0500
Subject: [ExI] EPOC EEG headset
In-Reply-To: <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com>
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com>
Message-ID: <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com>

Thanks. I didn't reply to Bryan's question as I am already exploring the typing speed question and wanted a good answer for him. However I must point out that the Epoc has a limited number of triggers, not enough for a full alphabet. So as such it is not designed for direct typing. I am looking at a way around this using mouse-based, predictive typing software. I know the exact piece of software I need and as soon as I get my hands on it, I will begin testing and let you know how fast it is. If it works I will make a video so you can see how it works. If it does work as predicted, it could spell the death of the keyboard, replaced by thought-activated typing!

Yes, I should be doing further videos with better quality and some explanation of what is happening. I imagine someone with no knowledge of the EPOC would simply think 'A robot controlled by computer, so what?' and not realise that it is actually being controlled directly by thought. The video was intended for those who already had some knowledge. For those that do: this is (I am told by Emotiv) the most cognitive actions used for an application so far. The control of the arm uses 8 cognitive actions (discrete triggering thoughts) and 2 facial movements to control 10 separate movements of the arm. I have already moved on to applying the system to wheelchair control for cerebral palsy sufferers and quadriplegics. Early days, but lots of promise.

-----Original Message-----
From: Giulio Prisco (2nd email)

Wow impressive. I am also interested in Bryan's question: how fast can you type?

The video is impressive for those who already know about EPOC, other viewers would not realize that you are controlling the robot arm by thought. Next time you may wish to include yourself with the device on, and your hands still.

G.
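A side note on "not enough triggers for a full alphabet": a small trigger set can still reach a full character set if each character is spelled as a short fixed-length sequence of triggers, at the cost of extra selections per character. A minimal sketch (Python; the 8-trigger assumption and the coding scheme are illustrative, not how the Emotiv software or Dasher actually work):

from itertools import product
from string import ascii_lowercase

TRIGGERS = range(8)                  # 8 distinguishable thoughts
ALPHABET = ascii_lowercase + " .,!"  # 30 symbols, well under 8*8 = 64

# two triggers per character cover the whole alphabet
codes = dict(zip(ALPHABET, product(TRIGGERS, repeat=2)))
decode = {code: ch for ch, code in codes.items()}

def transmit(msg):
    stream = [t for ch in msg for t in codes[ch]]  # what the user "thinks"
    pairs = zip(stream[::2], stream[1::2])         # what the software sees
    return "".join(decode[p] for p in pairs)

print(transmit("hello world"))   # -> hello world

Predictive software like Dasher improves on a fixed code by giving likely next letters bigger, cheaper targets, which is why it pairs well with a low-rate input device.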
From kanzure at gmail.com Fri Jan 22 18:27:24 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Fri, 22 Jan 2010 12:27:24 -0600
Subject: [ExI] EPOC EEG headset
In-Reply-To: <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com>
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com>
Message-ID: <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com>

2010/1/22 :
> I didn't reply to Bryan's question as I am already exploring the typing speed
> question and wanted a good answer for him. However I must point out that the
> Epoc has a limited number of triggers, not enough for a full alphabet. So as
> such it is not designed for direct typing.
> I am looking at a way around this using mouse-based, predictive typing
> software. I know the exact piece of software I need and as soon as I get my
> hands on it, I will begin testing and let you know how fast it is. If it
> works I will make a video so you can see how it works. If it does work as
> predicted, it could spell the death of the keyboard, replaced by
> thought-activated typing!

Mouse-based typing sucks immensely. Most EEG headsets get you up to 15 wpm. I can do 120 wpm on a QWERTY, and 150 wpm on a (very) good day. But if you insist on a mouse, try this on for size:

http://www.inference.phy.cam.ac.uk/dasher/

downloads: http://www.inference.phy.cam.ac.uk/dasher/Download.html

> For those that do: this is (I am told by Emotiv) the most cognitive actions
> used for an application so far. The control of the arm uses 8 cognitive
> actions (discrete triggering thoughts) and 2 facial movements to control 10
> separate movements of the arm.

What's the scaling factor? How many commands can you squeeze out per channel that you add to an EEG device? I know you can do more with better signal processing, but I haven't been able to find scholarly literature on this.

- Bryan
http://heybryan.org/
1 512 203 0507

From ablainey at aol.com Fri Jan 22 18:51:42 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Fri, 22 Jan 2010 13:51:42 -0500
Subject: [ExI] EPOC EEG headset
In-Reply-To: <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com>
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com>
Message-ID: <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com>

That's the very one I was after. Cheers for the link, I couldn't find it when I looked. Maybe because I was searching for 'rudolf' LOL.

-----Original Message-----
From: Bryan Bishop

But if you insist on a mouse, try this on for size:
http://www.inference.phy.cam.ac.uk/dasher/
downloads: http://www.inference.phy.cam.ac.uk/dasher/Download.html

From possiblepaths2050 at gmail.com Fri Jan 22 18:56:09 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 22 Jan 2010 11:56:09 -0700
Subject: [ExI] heaves a long broken psi
In-Reply-To: <9711229417674894B95766237C126A85@spike>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike>
Message-ID: <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>

Spike,

Would it take Almighty God giving you his business card to finally convince you of the supernatural?

John ; )

From thespike at satx.rr.com Fri Jan 22 19:13:15 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 13:13:15 -0600
Subject: [ExI] heaves a long broken psi
In-Reply-To: <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID: <4B59F8CB.4040604@satx.rr.com>

On 1/22/2010 12:56 PM, John Grigg wrote:

> Would it take Almighty God giving you his business card to finally
> convince you of the supernatural?

Not necessary, an article in Nature or Science would do it.
Damien Broderick

From thespike at satx.rr.com Fri Jan 22 19:20:11 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 13:20:11 -0600
Subject: [ExI] psi in Nature
In-Reply-To: <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net>
Message-ID: <4B59FA6B.7060400@satx.rr.com>

On 1/22/2010 11:51 AM, John Clark wrote:

> I don't need to personally experience Psi, as I've said many many times
> just show me a pro Psi article in Nature or Science, that's all I ask.

Obviously you don't mean that. Here:

R. Targ and H. E. Puthoff, "Information Transmission under Conditions of Sensory Shielding," Nature, vol. 251, pp. 602-607 (October 18, 1974)

Anticipated response: "Yeah, right, and what else have they published there in the last 36 years? BULLSHIT!"

Oh, so you *didn't* mean what you wrote.

Damien Broderick

From sparge at gmail.com Fri Jan 22 19:29:31 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 22 Jan 2010 14:29:31 -0500
Subject: [ExI] psi in Nature
In-Reply-To: <4B59FA6B.7060400@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> <4B59FA6B.7060400@satx.rr.com>
Message-ID:

On Fri, Jan 22, 2010 at 2:20 PM, Damien Broderick wrote:
>
> R. Targ and H. E. Puthoff, "Information Transmission under Conditions of
> Sensory Shielding," Nature, vol. 251, pp. 602-607 (October 18, 1974)

http://www.nature.com/nature/journal/v251/n5476/abs/251602a0.html

-Dave
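Whatever one makes of that particular paper, the statistics such experiments turn on are simple to state: how improbable is the observed hit count if only chance is operating? A minimal exact binomial tail (Python; the trial numbers below are hypothetical, not taken from the Targ and Puthoff study):

from math import comb

def p_value(hits, trials, p_chance):
    """One-tailed P(X >= hits) for X ~ Binomial(trials, p_chance)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# e.g. 30 hits in 100 trials of a 1-in-5 forced-choice task:
print(p_value(30, 100, 0.2))   # roughly 0.01: rare under pure chance

The arguments in this thread are mostly about everything such a p-value does not capture: selection of trials, sensory leakage, and replication.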
From sjatkins at mac.com Fri Jan 22 20:03:52 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 22 Jan 2010 12:03:52 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To: <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID:

On Jan 22, 2010, at 10:56 AM, John Grigg wrote:

> Spike,
>
> Would it take Almighty God giving you his business card to finally convince you of the supernatural?

What does this "supernatural" mean, exactly? If we are in a sim, would the computer the sim is running on be in a supernatural realm? Why or why not?

- s

From sjatkins at mac.com Fri Jan 22 20:11:26 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 22 Jan 2010 12:11:26 -0800
Subject: [ExI] psi in Nature
In-Reply-To: <4B59FA6B.7060400@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> <4B59FA6B.7060400@satx.rr.com>
Message-ID: <5B997284-0C59-4CD5-BA64-4746244C2575@mac.com>

On Jan 22, 2010, at 11:20 AM, Damien Broderick wrote:

> On 1/22/2010 11:51 AM, John Clark wrote:
>
>> I don't need to personally experience Psi, as I've said many many times
>> just show me a pro Psi article in Nature or Science, that's all I ask.
>
> Obviously you don't mean that. Here:
>
> R. Targ and H. E. Puthoff, "Information Transmission under Conditions of Sensory Shielding," Nature, vol. 251, pp. 602-607 (October 18, 1974)

IIRC that article came under serious dispute later. I have no doubt that reputable scientists have studied psi and that more than a few are convinced of its reality. Whether their evidence is convincing to myself or others is a different matter. Personally I have had some anecdotal experiences I cannot explain without it, and I am quite imaginative and inventive of explanations. But that is not scientific evidence of its reality, of course. And I have no real idea of what makes at least some psi work or what it implies. It is annoying that it seems to be extremely undependable and difficult to formally test, whatever the heck it may be. But I cannot simply dismiss it.

- samantha

From sparge at gmail.com Fri Jan 22 20:18:02 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 22 Jan 2010 15:18:02 -0500
Subject: [ExI] heaves a long broken psi
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID:

On Fri, Jan 22, 2010 at 3:03 PM, Samantha Atkins wrote:
>
> What does this "supernatural" mean, exactly?

Above the laws of nature.

> If we are in a sim, would the computer the sim is running on be in a supernatural realm? Why or why not?

The simulating computer would not be in the simulated realm, so it'd be supernatural. But it'd also be undetectable from within the simulated realm. Things we consider supernatural (ESP, vampires, God) could be implemented in a simulated realm simply by making things happen that aren't possible under the programmed laws of physics.

-Dave

From sparge at gmail.com Fri Jan 22 20:21:53 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 22 Jan 2010 15:21:53 -0500
Subject: [ExI] psi in Nature
In-Reply-To: <5B997284-0C59-4CD5-BA64-4746244C2575@mac.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> <4B59FA6B.7060400@satx.rr.com> <5B997284-0C59-4CD5-BA64-4746244C2575@mac.com>
Message-ID:

On Fri, Jan 22, 2010 at 3:11 PM, Samantha Atkins wrote:
>
> On Jan 22, 2010, at 11:20 AM, Damien Broderick wrote:
>>
>> R. Targ and H. E. Puthoff, "Information Transmission under Conditions of Sensory Shielding," Nature, vol. 251, pp. 602-607 (October 18, 1974)
>
> IIRC that article came under serious dispute later.

http://www.zem.demon.co.uk/flim.htm

-Dave

From sjatkins at mac.com Fri Jan 22 20:24:36 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 22 Jan 2010 12:24:36 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID: <13DC89A2-C27D-4CEA-A77C-5EF791CD4DF5@mac.com>

On Jan 22, 2010, at 12:18 PM, Dave Sill wrote:

> On Fri, Jan 22, 2010 at 3:03 PM, Samantha Atkins wrote:
>>
>> What does this "supernatural" mean, exactly?
>
> Above the laws of nature.
>
>> If we are in a sim, would the computer the sim is running on be in a supernatural realm? Why or why not?
>
> The simulating computer would not be in the simulated realm, so it'd
> be supernatural. But it'd also be undetectable from within the
> simulated realm.

So by supernatural you mean not governed by the physics of your local space-time bubble? If so then the universe (as space-time bubble within the multiverse) exists within a supernatural realm. That is a tad uncomfortable, no?
- samantha

From thespike at satx.rr.com Fri Jan 22 20:33:50 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 14:33:50 -0600
Subject: [ExI] psi in Nature
In-Reply-To: <5B997284-0C59-4CD5-BA64-4746244C2575@mac.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> <4B59FA6B.7060400@satx.rr.com> <5B997284-0C59-4CD5-BA64-4746244C2575@mac.com>
Message-ID: <4B5A0BAE.1020702@satx.rr.com>

On 1/22/2010 2:11 PM, Samantha Atkins wrote:

>> R. Targ and H. E. Puthoff, "Information Transmission under Conditions of Sensory Shielding," Nature, vol. 251, pp. 602-607 (October 18, 1974)

> IIRC that article came under serious dispute later.

Unlike any other paper ever published in Nature, none of which has ever been questioned. The objections were not fatal (indeed, some were specious), but of course it was widely assumed (on the basis of no investigation) that they must be, as always in this topic. Again, though, none of this is to the point of the thread; JKC said he'd accept the scientific probity of psi if an article corroborating it appeared in Nature or Science. This one did.

>I have no doubt that reputable scientists have studied psi and that more than a few are convinced of its reality.

That can't possibly be true--John Clark has never heard of them, and assures us they are really lavatory cleaners and truck drivers typing something up on their day off.

Damien Broderick

From thespike at satx.rr.com Fri Jan 22 20:42:39 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 14:42:39 -0600
Subject: [ExI] heaves a long broken psi
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID: <4B5A0DBF.50104@satx.rr.com>

On 1/22/2010 2:18 PM, Dave Sill wrote:

>> What does this "supernatural" mean, exactly?
> Things we consider supernatural (ESP, vampires, God)

This misuses the word. If ESP is real, there is no reason to suppose that it functions by abrogating the laws of physics; far more economical to suppose that we do not yet fully understand all those laws. In the 19th century, the radioactive heating of the sun was not supernatural, just unexplained. Believing the sun was a god was a supernatural explanation, but not a very useful one. Vampires, if they existed outside fiction, would not be supernatural either, unless their powers are described as deriving directly from satanic supernatural beings. "God" is certainly supernatural--outside the realm of nature entirely, ontologically distinct and superior to nature. And almost certainly non-existent. At any rate, I've never seen a paper in Nature or Science proving its existence.

Damien Broderick

From pharos at gmail.com Fri Jan 22 20:52:32 2010
From: pharos at gmail.com (BillK)
Date: Fri, 22 Jan 2010 20:52:32 +0000
Subject: [ExI] heaves a long broken psi
In-Reply-To: <4B5A0DBF.50104@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com>
Message-ID:

On 1/22/10, Damien Broderick wrote:
> This misuses the word. If ESP is real, there is no reason to suppose that
> it functions by abrogating the laws of physics; far more economical to
> suppose that we do not yet fully understand all those laws.
> In the 19th century, the radioactive heating of the sun was not
> supernatural, just unexplained. Believing the sun was a god was a
> supernatural explanation, but not a very useful one. Vampires, if they
> existed outside fiction, would not be supernatural either, unless their
> powers are described as deriving directly from satanic supernatural
> beings. "God" is certainly supernatural--outside the realm of nature
> entirely, ontologically distinct and superior to nature. And almost
> certainly non-existent. At any rate, I've never seen a paper in Nature
> or Science proving its existence.
>

I saw a paper in Nature once that claimed that Uri Geller had psychic powers and wasn't just a tricky magician. Oh wait.......

BillK ;)

From sparge at gmail.com Fri Jan 22 20:57:04 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 22 Jan 2010 15:57:04 -0500
Subject: [ExI] heaves a long broken psi
In-Reply-To: <13DC89A2-C27D-4CEA-A77C-5EF791CD4DF5@mac.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <13DC89A2-C27D-4CEA-A77C-5EF791CD4DF5@mac.com>
Message-ID:

On Fri, Jan 22, 2010 at 3:24 PM, Samantha Atkins wrote:
>
> So by supernatural you mean not governed by the physics of your local space-time bubble?

Yes, outside the currently-applicable laws of nature.

> If so then the universe (as space-time bubble within the multiverse) exists within a supernatural realm.

I wasn't aware that that had been proven to be the case. Theorized, yes.

> That is a tad uncomfortable, no?

Of course, but there's not much we can do about it. The odds of hacking our simulated universe, if that's what it is, seem pretty low.

-Dave

From ablainey at aol.com Fri Jan 22 21:00:49 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Fri, 22 Jan 2010 16:00:49 -0500
Subject: [ExI] psi in Nature
In-Reply-To: <5B997284-0C59-4CD5-BA64-4746244C2575@mac.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> <4B59FA6B.7060400@satx.rr.com> <5B997284-0C59-4CD5-BA64-4746244C2575@mac.com>
Message-ID: <8CC69BD1C2C61CE-5414-391@webmail-m010.sysops.aol.com>

Likewise I have personal anecdotal experience of psi. I have previously put forward the idea that psi could be attributed to quantum entanglement. If the atoms of the universe have mostly existed since the big bang, it stands to reason that a percentage of matter exists in a quantum entangled state (it does to me, anyway). From this it isn't much of a jump to imagine that atoms in the brain of one individual are state-locked with atoms in another. With this it is not beyond the realms of possibility that the brain can influence the state of the atoms to broadcast information. All entangled atoms would change state accordingly, thus passing the information to their host brain. The level of psi ability would therefore be dependent on the quantity of entangled atoms in each individual's brain.

-----Original Message-----
From: Samantha Atkins

IIRC that article came under serious dispute later. I have no doubt that reputable scientists have studied psi and that more than a few are convinced of its reality. Whether their evidence is convincing to myself or others is a different matter. Personally I have had some anecdotal experiences I cannot explain without it, and I am quite imaginative and inventive of explanations. But that is not scientific evidence of its reality, of course.
And I have no real idea of what makes at least some psi work or what it implies. It is annoying that it seems to be extremely undependable and difficult to formally test, whatever the heck it may be. But I cannot simply dismiss it.

- samantha

From lacertilian at gmail.com Fri Jan 22 17:17:45 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Fri, 22 Jan 2010 09:17:45 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To: <9711229417674894B95766237C126A85@spike>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike>
Message-ID:

I think it goes deeper than clinging to one's own worldview in spite of the evidence, or outright avoiding possible evidence that would contradict it. I say this because I don't meditate. I fully believe that if a person meditates regularly over a long period of time, they will naturally start to hallucinate in the process. Waking dreams, cryptic muttering from the subconscious. Now, considering the fact I enjoy my dreams and would very much like the ability to get more of them and experience them consciously, one would expect me to meditate every day. But I don't, and I'm not entirely sure why.

I can identify one rationalization: I don't know how to meditate. Why, then, do I do nothing to learn? More rationalizations pop up: there are too many different types of meditation, I don't know which one to choose, it's probably not the kind of thing you could learn off the Internet, I would have to take a class. But I haven't even done the research to know these things! They're just convenient guesses. So, even within my own worldview I am actively avoiding the weird. Even the weird I say that I want.

I suspect it's a cultural thing, and not an inborn tendency of the human species. It may have been useful to bring us up to the scientific sophistication we have today, but now I feel like it's only holding us back; we have the technology to do just about anything we could want to do, short of molecular reconfiguration, but we can't seem to think of anything to do with it. Before artificial intelligence becomes a possibility, before the first mind is uploaded, we have to understand a whole lot more weirdness. At this rate, we have a long time to wait.

From sparge at gmail.com Fri Jan 22 21:08:28 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 22 Jan 2010 16:08:28 -0500
Subject: [ExI] heaves a long broken psi
In-Reply-To: <4B5A0DBF.50104@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com>
Message-ID:

On Fri, Jan 22, 2010 at 3:42 PM, Damien Broderick wrote:
> On 1/22/2010 2:18 PM, Dave Sill wrote:
>
>> Things we consider supernatural (ESP, vampires, God)
>
> This misuses the word.

I did say *consider* supernatural. That leaves room for these entities/phenomena to not actually be supernatural.

> If ESP is real, there is no reason to suppose that it
> functions by abrogating the laws of physics; far more economical to suppose
> that we do not yet fully understand all those laws.

Agreed.

> Vampires, if they existed outside fiction, would not be
> supernatural either, unless their powers are described as deriving directly
> from satanic supernatural beings.
Vampires are not consistent with our current understanding of the laws of nature. If they were real, that would indicate that our understanding was lacking.

-Dave

From thespike at satx.rr.com Fri Jan 22 21:30:40 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 15:30:40 -0600
Subject: [ExI] vampires
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com>
Message-ID: <4B5A1900.7050300@satx.rr.com>

On 1/22/2010 3:08 PM, Dave Sill wrote:

>> Vampires, if they existed outside fiction, would not be
>> supernatural either, unless their powers are described as deriving directly
>> from satanic supernatural beings.
>
> Vampires are not consistent with our current understanding of the laws
> of nature. If they were real, that would indicate that our
> understanding was lacking.

It depends which vampires. As I understand it, vampires hibernate (not at all impossible), get their sustenance from drinking blood (ditto), exert great strength (some people are in fact strong), are damaged by sunlight (wellll, sort of possible, maybe--but perhaps not to the point of combustion, unless they have very strange skin indeed), and either morph their body shape into those of other animals such as bats or levitate by the pure force of their wicked intention (most unlikely, especially the latter). The word for this sort of thing, by and large, is preternatural. "Supernatural" requires a strict ontological divide.

Damien Broderick

From spike66 at att.net Fri Jan 22 21:31:36 2010
From: spike66 at att.net (spike)
Date: Fri, 22 Jan 2010 13:31:36 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To: <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID:

...On Behalf Of John Grigg
Subject: Re: [ExI] heaves a long broken psi

Spike, Would it take Almighty God giving you his business card to finally convince you of the supernatural? John ; )

Interesting question from the Griggmeister. My worldview has no remaining infrastructure to support the existence of an almighty, or even a lesser version thereof. (Would that be called a somemighty? If even less powerful, would it be called a slightmighty?)

What would freak my beak is if I somehow discovered additional evidence that we are all living in a digital simulation. Rather that I am, and all you guys don't actually exist at all, if by "exist" I mean something beyond a very sophisticated version of the avatars in Second Life. Avatars! All of yas!

Kidding of course. Johnny, you are the only actual physical being, and we are the avatars. Somebody had to break it to you eventually. Might as well be a good friend, even if a digitally simulated one.

spike

From jrd1415 at gmail.com Fri Jan 22 21:49:23 2010
From: jrd1415 at gmail.com (Jeff Davis)
Date: Fri, 22 Jan 2010 14:49:23 -0700
Subject: [ExI] heaves a long broken psi
In-Reply-To: <9711229417674894B95766237C126A85@spike>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike>
Message-ID:

Spike, Damien, Gordon, Stathis, Ben,...the bad boys, Eugen and John ..., all my friends,

I can prove that psi is real.
Last night, just as Spike was composing his email, I was enjoying my last walk of the day -- night actually -- beneath a moonless but star-bedazzled sky. One of the things I love about my winters here on the East Cape is the moonlit and starlit walks in the cloudless amphitheatre of the universe. Me, the wife and the dogs.

So, I'm walking along, pondering the three threads -- Gordon's "Syntax macht nicht semantics", the psi moment, and finally, The Looming Quantum Brain, when I paused momentarily, at the psi question. More specifically, paused to consider an answer to spike's original challenge regarding his lack of interest. Which I remember as something along the lines of "Show me a plausible mechanism, then we'll talk." I immediately had an answer, but I enjoyed it privately, avoiding the burden of typing it up. ...

******************************************

You know about the Aspect QM work. Decoherence happens. Instantaneously. Super-relativistically. Action at point 'a' in 3D space causes an INSTANTANEOUS effect at arbitrarily distant point 'b'. Others may offer alternative explanations, but to me, the explanation that tops the list -- I forget where I heard it -- admirably robust in its simplicity, is that the two entangled particles are in contact. (Isn't there a body of philosophical work dealing with the logical impossibility of action at a distance, implying the necessity of contact?) So the 'entangled' particles are in contact. [Get over it unless you've got a better explanation. Oh! I'm sorry. Did I write that out loud? My bad.]

Absent the appearance of contact in any of the 4 D's we're familiar with, it seems so very reasonable to suppose that there must be another D, or several, wherein the apparent contact is mediated. And then, of course, we all know that the guys with the big hat sizes in physics have for some time now been talking up extra dimensions. "Ten or eleven at least...", they say, "...maybe as many as 26.", they say. "Gotta have it." they say. "'Standard Model' goes kaplooie without it", they say. They say that. Don't ask me why. My hat size is smaller.

So, isn't this the answer to spike's challenge? Isn't this a confirmed extra-dimensional -- "extra" to our four D's -- linkage that manifests itself in -- leaks over into -- our 4D realm?

********************************************

So, I was walking along, slightly stony no doubt, going through a variety of phrases where I replaced 'sigh' with 'psi', just as spike has done in the subject line above. I tried to find an exceedingly clever phrasing with which to deploy and intermingle the emotional and semantic nuances of the two words. I worked at it for several minutes -- AT EXACTLY THE TIME SPIKE WAS COMPOSING HIS POSTING -- came up with nothing. Nothing clever and special and satisfying, that is, and finally gave up, allowing myself to be satisfied with something that used the phrase "...heaved a psi." -- JUST AS SPIKE DID IN HIS POSTING.

Coincidence? Puleeeeeese. So, what else is there to say? Case closed. Psi is proven. Next question.

Best, Jeff Davis

"And I think to myself, what a wonderful world!"
Louis Armstrong

From spike66 at att.net Fri Jan 22 21:38:36 2010
From: spike66 at att.net (spike)
Date: Fri, 22 Jan 2010 13:38:36 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To: <4B59F8CB.4040604@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B59F8CB.4040604@satx.rr.com>
Message-ID: <98EA7926D9D5431F8F5E3D07779CB361@spike>

> ...On Behalf Of Damien Broderick
> Subject: Re: [ExI] heaves a long broken psi
>
> On 1/22/2010 12:56 PM, John Grigg wrote:
>
> > Would it take Almighty God giving you his business card to finally
> > convince you of the supernatural?
>
> Not necessary, an article in Nature or Science would do it.
>
> Damien Broderick

That would only work if the editors of Nature or Science were the almighty. One of the important messages that has come to our attention recently is that the peer review process is far from perfect. It is better than nothing, but it is subject to error and intentional malicious action, as well as all the usual economic pressures. Science and Nature are two very highly respected magazines, but they are under all the same constraints as Popular Mechanics and National Enquirer: they make their living by selling paper. If they don't give their magazine-buying audience what they want, they are busted.

spike

From jameschoate at austin.rr.com Fri Jan 22 22:08:04 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Fri, 22 Jan 2010 16:08:04 -0600
Subject: [ExI] vampires
In-Reply-To: <4B5A1900.7050300@satx.rr.com>
Message-ID: <20100122220804.UNTMD.374377.root@hrndva-web28-z01>

I'll have to disagree here; preternatural means 'abnormal' and does not necessarily carry any connotation of transcendence. Supernatural however does, as in 'beyond nature'. http://en.wikipedia.org/wiki/Preternatural (which matches both my understanding as well as several other references I checked). Any distinction one can make here would in fact be rather arbitrary and driven by the particular social group's expectations rather than some objective measure. Hence the ontological significance is rather specious.

---- Damien Broderick wrote:

> especially the latter). The word for this sort of thing, by and large,
> is preternatural. "Supernatural" requires a strict ontological divide.

-- -- -- --
Venimus, Vidimus, Dolavimus

jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com
h: 512-657-1279 w: 512-845-8989
www.ssz.com
http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu
http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center

Adapt, Adopt, Improvise
-- -- -- --

From spike66 at att.net Fri Jan 22 22:10:13 2010
From: spike66 at att.net (spike)
Date: Fri, 22 Jan 2010 14:10:13 -0800
Subject: [ExI] vampires
In-Reply-To: <4B5A1900.7050300@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <4B5A1900.7050300@satx.rr.com>
Message-ID: <4DD956D3B9D84B4295DC477B5DA22E04@spike>

> ...On Behalf Of Damien Broderick
> Subject: [ExI] vampires
>
> On 1/22/2010 3:08 PM, Dave Sill wrote:
>
> >> Vampires, if they existed outside fiction, would not be
> >> supernatural either, unless their powers are described as deriving
> >> directly from satanic supernatural beings.
> > Vampires are not consistent with our current understanding of the laws
> > of nature. If they were real, that would indicate that our
> > understanding was lacking.

What's supernatural about vampires? If defined as people who drink human blood, such a thing exists I can assure you. Granted they are some seriously whacked out people, and I get squicked just thinking about it, but the practice does exist.

Interesting point however, for the example illustrates how we might theorize some straw-man aspect of vampires that does not exist (such as turning into a bat for instance) and then dismiss the notion based on that.

Extrapolate that notion to psi. Try to imagine a form of psi which we know can exist under perfectly understandable circumstances, such as the following experiment: take a deck of cards and shuffle thoroughly. Flip the cards one at a time and try to guess beforehand what it will be. If you have a good memory, you will score better than 1 correct guess each time thru the deck, because the last few cards will have few possibilities from which to choose. If you have a perfect memory, your chances of getting the second to last card are 50% and the last one 100%.

Of course this isn't *real* psi. Like artificial intelligence, as soon as we know how to program it, that is no longer actual artificial intelligence, but rather just good programming technique.

So your job: come up with experiments that have a natural basis but are psi-like.

spike
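spike's deck experiment is easy to put numbers on. With perfect memory the best strategy is to guess any card not yet seen, so the expected score per deck is the harmonic sum 1/52 + 1/51 + ... + 1/1, about 4.5 correct guesses rather than the 1 expected with no memory at all. A quick Monte Carlo check (Python; the only claim here is the arithmetic):

import random

def one_deck():
    deck = list(range(52))
    random.shuffle(deck)
    remaining = set(deck)
    hits = 0
    for card in deck:
        guess = random.choice(sorted(remaining))  # any unseen card
        if guess == card:
            hits += 1
        remaining.remove(card)
    return hits

trials = 10000
print(sum(one_deck() for _ in range(trials)) / trials)  # ~4.54

The last two cards contribute the 50% and 100% spike mentions: memory masquerading as precognition.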
From spike66 at att.net Fri Jan 22 22:12:32 2010
From: spike66 at att.net (spike)
Date: Fri, 22 Jan 2010 14:12:32 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike>
Message-ID:

> ...On Behalf Of Jeff Davis
...
> Subject: Re: [ExI] heaves a long broken psi
>
> Spike, Damien, Gordon, Stathis, Ben,...the bad boys, Eugen
> and John ..., all my friends,
>
> I can prove that psi is real....
> So, what else is there to say? Case closed. Psi is proven.
> Next question.
>
> Best, Jeff Davis

Jeff you are a treasure, pal. A true gift to the ExI chat list. May you live a thousand years, just for starters.

spike

From ablainey at aol.com Fri Jan 22 22:19:19 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Fri, 22 Jan 2010 17:19:19 -0500
Subject: [ExI] EPOC EEG headset
In-Reply-To: <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com>
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com>
Message-ID: <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com>

I just had a go with it. I had to make a quick program to generate mouse clicks from some of the EPOC outputs. But now I can type with Dasher, highlight the text, copy, change window, and paste the text, all hands free. Dasher is slower than typing, but my spelling would be better with it! I estimated about 50 wpm on my first attempt, but that should increase with use. But it is hands free!

-----Original Message-----
From: ablainey at aol.com
To: extropy-chat at lists.extropy.org
Sent: Fri, 22 Jan 2010 18:51
Subject: Re: [ExI] EPOC EEG headset

That's the very one I was after. Cheers for the link, I couldn't find it when I looked. Maybe because I was searching for 'rudolf' LOL.

-----Original Message-----
From: Bryan Bishop

But if you insist on a mouse, try this on for size:
http://www.inference.phy.cam.ac.uk/dasher/
downloads: http://www.inference.phy.cam.ac.uk/dasher/Download.html

From pharos at gmail.com Fri Jan 22 22:36:23 2010
From: pharos at gmail.com (BillK)
Date: Fri, 22 Jan 2010 22:36:23 +0000
Subject: [ExI] vampires
In-Reply-To: <4DD956D3B9D84B4295DC477B5DA22E04@spike>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <4B5A1900.7050300@satx.rr.com> <4DD956D3B9D84B4295DC477B5DA22E04@spike>
Message-ID:

On 1/22/10, spike wrote:
>
> So your job: come up with experiments that have a natural basis but are
> psi-like.
>

Me, me, me......

At one time, in a previous existence, I was the whiz-kid network manager responsible for around 150 networked pcs. The users were many and varied, with many different levels of pc skills. The skilled users got themselves into really serious problems that required a good bit of work to extricate them from. My reputation as a tech god was not built by them, though. It was the less skilled user calls that made me famous.

The call usually went along the lines of: 'X happens when I do Y and I can't get out of it. I've tried everything. I can't get my work done - you've got to fix it'.

So I wander round to the pc smothered in furry animals and post-it notes with passwords written on them. First the soothing words, 'Don't panic, Bill's here - everything is going to be all right'. (The professional bedside manner is essential for instilling confidence. The placebo effect is a real miracle). Then I stand behind them as they sit at their pc. (The laying-on of hands is optional, depending on how attractive they were). 'Now, calm down. Switch your pc off. Now switch it on and show me what you did'.

Pause, while the pc whirred and clacked into life. Then -- 'OHH! It's not doing it now! It's working again! You're wonderful, Bill! I want to have your baby!'

:) BillK

From spike66 at att.net Fri Jan 22 22:17:56 2010
From: spike66 at att.net (spike)
Date: Fri, 22 Jan 2010 14:17:56 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID:

> ...On Behalf Of spike
...
> Subject: Re: [ExI] heaves a long broken psi

Such egomaniacs are those who answer their own posts. Oh, wait...

> > ...My worldview has
> no remaining infrastructure to support the existence of an
> almighty, or even a lesser version thereof. (Would that be
> called a somemighty? If even less powerful, would it be
> called a slightmighty?)... spike

If even less powerful to the point of being only a bit more mighty than a natural being, would it then be known as a mitemighty?

{8^D Again tickling the tail of the mighty ExI pundragon.
spike

From thespike at satx.rr.com Fri Jan 22 22:57:05 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 16:57:05 -0600
Subject: [ExI] vampires
In-Reply-To: <20100122220804.UNTMD.374377.root@hrndva-web28-z01>
References: <20100122220804.UNTMD.374377.root@hrndva-web28-z01>
Message-ID: <4B5A2D41.6010602@satx.rr.com>

On 1/22/2010 4:08 PM, jameschoate at austin.rr.com wrote:

> I'll have to disagree here; preternatural means 'abnormal' and does not necessarily carry any connotation of transcendence. Supernatural however does, as in 'beyond nature'.

That's what I just said. Vampires are preternatural, not supernatural.

> http://en.wikipedia.org/wiki/Preternatural (which matches both my understanding as well as several other references I checked).

And golly, look what it says:

"The term is often used to distinguish from the divine (supernatural) while maintaining a distinction from the purely natural. For instance, in certain theologies, the angels, both holy and fallen, are endowed with preternatural powers. Their intellect, speed, and other characteristics are described as beyond human capacities but yet still finite. Some examples of preternatural creatures in fiction include fairies, werewolves, vampires,"

Damien Broderick

From thespike at satx.rr.com Fri Jan 22 23:01:11 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 17:01:11 -0600
Subject: [ExI] EPOC EEG headset
In-Reply-To: <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com>
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com>
Message-ID: <4B5A2E37.1000308@satx.rr.com>

On 1/22/2010 4:19 PM, ablainey at aol.com wrote:

> I just had a go with it. I had to make a quick program to generate mouse
> clicks from some of the EPOC outputs. But now I can type with Dasher,
> highlight the text, copy, change window, and paste the text, all hands free.
> Dasher is slower than typing, but my spelling would be better with it!
> I estimated about 50 wpm on my first attempt, but that should increase
> with use. But it is hands free!

How is this better than dictating to something like Dragon NaturallySpeaking? It's marvelous, but so is typing with your elbow. No great benefit unless your vocal cords and fingers have been removed or disabled. (Is the advantage that it can be done silently? That could be important on a plane, say, or in a crowded room. If you could do this without being arrested as a Terrorist with a Device.)

Damien Broderick

From jameschoate at austin.rr.com Fri Jan 22 23:02:17 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Fri, 22 Jan 2010 23:02:17 +0000
Subject: [ExI] vampires
In-Reply-To: <4DD956D3B9D84B4295DC477B5DA22E04@spike>
Message-ID: <20100122230217.16F2E.374954.root@hrndva-web28-z01>

There is a good point to be made here, that is seldom actually made. Existence trumps interpretation. Therefore if vampires did exist they would by definition be natural. The term/phrase 'natural' is an anthropocentrism and actually has no significance. If anything it's a clear indicator, though unintended, of our limited thought process.
---- spike wrote: > What's supernatural about vampires? -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From kanzure at gmail.com Fri Jan 22 23:05:42 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 22 Jan 2010 17:05:42 -0600 Subject: [ExI] EPOC EEG headset In-Reply-To: <4B5A2E37.1000308@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> Message-ID: <55ad6af71001221505s10424970xca462c7c25b38344@mail.gmail.com> On Fri, Jan 22, 2010 at 5:01 PM, Damien Broderick wrote: > On 1/22/2010 4:19 PM, ablainey at aol.com wrote: >> I Just had a go with it. I had to make a quick program to generate mouse >> clicks from some of the EPOC outputs. But now I can type with Dasher, >> Highlight the text, copy, change window, and paste the text all hands >> free. >> Dasher is slower than typing, but my spelling would be better with it! >> I estimated about 50 wpm on my first attempt, but that should increase >> with use. But it is hands free! > > How is this better than dictating to something like Dragon > NaturallySpeaking? It's marvelous, but so is typing with your elbow. No > great benefit unless your vocal chords and fingers have been removed or > disabled. (Is the advantage that it can be done silently? That could be > important on a plane, say, or in a crowded room. If you could do this > without being arrested as a Terrorist with a Device.) In general, that's what I've been trying to figure out: what advantages does EEG actually give you, and in particular what are the future prospects? I have had an absurdly hard time tracking down predictions in the scholarly literature. There's absolutely nothing that says "we predict an improvement of 10 to 50 wpm per electrode/channel of data recovery". Now, typing might not be the best application of EEG, and there's always the 6-axis arms to consider. But that doesn't really excite me as much for some reason. At least wpm is measurable performance metric. Are there any EEG headsters on the list that can provide some PDF files for me to read through? Here's what I've been reading in the past few months: http://designfiles.org/papers/neuro/eeg/ - Bryan http://heybryan.org/ 1 512 203 0507 From jameschoate at austin.rr.com Fri Jan 22 23:05:42 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Fri, 22 Jan 2010 23:05:42 +0000 Subject: [ExI] vampires In-Reply-To: <4B5A2D41.6010602@satx.rr.com> Message-ID: <20100122230542.FZZE7.374991.root@hrndva-web28-z01> Then you're reading comprehension is having a bad day off work. Preternatural includes supernatural, not the other way around. Better re-read that definition and consider what the prefix ab- means. 
---- Damien Broderick wrote: > On 1/22/2010 4:08 PM, jameschoate at austin.rr.com wrote: > > > I'll have to disagree here, preternatural means 'abnormal' and does not carry necessarily any connotation of transcendence. Supernatural however does as in 'beyond nature'. > > That's what I just said. Vampires are preternatural, not supernatural. -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From pharos at gmail.com Fri Jan 22 23:14:03 2010 From: pharos at gmail.com (BillK) Date: Fri, 22 Jan 2010 23:14:03 +0000 Subject: [ExI] EPOC EEG headset In-Reply-To: <4B5A2E37.1000308@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> Message-ID: On 1/22/10, Damien Broderick wrote: > How is this better than dictating to something like Dragon > NaturallySpeaking? It's marvelous, but so is typing with your elbow. No > great benefit unless your vocal chords and fingers have been removed or > disabled. (Is the advantage that it can be done silently? That could be > important on a plane, say, or in a crowded room. If you could do this > without being arrested as a Terrorist with a Device.) > > You mean like this? Quote: A US Airways passenger plane was diverted to Philadelphia on Thursday after a religious item worn by a Jewish passenger was mistaken as a bomb, Philadelphia police said. A passenger was alarmed by the phylacteries, religious items which observant Jews strap around their arms and heads as part of morning prayers, on the flight from New York's La Guardia airport heading to Louisville. Phylacteries, called tefillin in Hebrew, are two small black boxes with black straps attached to them. Observant Jewish men are required to place one box on their head and tie the other one on their arm each weekday morning. ---------- What with this and the new underpants security checks, flying is getting really tedious. BillK From thespike at satx.rr.com Fri Jan 22 23:40:01 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 22 Jan 2010 17:40:01 -0600 Subject: [ExI] EPOC EEG headset In-Reply-To: References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> Message-ID: <4B5A3751.4030908@satx.rr.com> On 1/22/2010 5:14 PM, BillK wrote: > > Quote: > A US Airways passenger plane was diverted to Philadelphia on Thursday > after a religious item worn by a Jewish passenger was mistaken as a > bomb, Philadelphia police said. 
> > A passenger was alarmed by the phylacteries

Imagine how distraught they get when I whip out my E-meter for a bris I mean brisk mid-flight thetan clearing.

Damien Broderick

From spike66 at att.net Fri Jan 22 23:20:15 2010
From: spike66 at att.net (spike)
Date: Fri, 22 Jan 2010 15:20:15 -0800
Subject: [ExI] heaves a long broken psi
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com>
Message-ID: <74E685E20B9E4F2D8F7AD4DD32827EC0@spike>

> On Behalf Of spike
> ...
>
> If even less powerful to the point of being only a bit more
> mighty than a natural being, would it then be known as a mitemighty?
>
> {8^D spike

No wait, that would be a being who had supernatural powers but not all the time. The powers would mysteriously turn on and off as if by some stochastic process, a cosmic roll of the dice determining if the supernatural powers were there or not. Or am I getting it confused with a might-mighty?

What if a supernatural being had one millionth the powers ordinarily assigned to god? Would that be a micromighty? And if a thousandth, a millimighty?

spike

(Jeff! Damien! Do let us have a rip roaring language abuse party me lads, like we used to do back in the 90s. Others, do pitch into our verbal pillow fight!)

From spike66 at att.net Fri Jan 22 23:50:39 2010
From: spike66 at att.net (spike)
Date: Fri, 22 Jan 2010 15:50:39 -0800
Subject: [ExI] EPOC EEG headset
In-Reply-To: <55ad6af71001221505s10424970xca462c7c25b38344@mail.gmail.com>
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <55ad6af71001221505s10424970xca462c7c25b38344@mail.gmail.com>
Message-ID:

> ...On Behalf Of Bryan Bishop
...
> Subject: Re: [ExI] EPOC EEG headset
> ...
> In general, that's what I've been trying to figure out: what
> advantages does EEG actually give you, and in particular what
> are the future prospects?... - Bryan

The use of an EEG for typing would be just a technology demonstration, since we have better ways to get thoughts to text. But the demo will perhaps get people thinking of possible uses of mind-machine interfaces.

The one that came to mind with me is to use the technology for sex machines. I wasn't thinking of enabling the quadriplegic to masturbate, but rather to have an android or estroid partner respond to one's mental state, or possibly verbal commands, or both. Of course if we get sex machines that can read our minds and do what we want them to do, it means the extinction of humanity by failure to procreate. But at least the last generation will really enjoy themselves.

Just thinking of the pile of money to be made by the first company to develop a mind reading sex machine sends chills up and down my spine.

spike
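On Bryan's scaling question upthread: a crude way to frame it is to treat the set of reliable triggers as a symbol alphabet and convert selection rate into bits per second, then into words per minute. A sketch (Python; every number here is an assumption for illustration, not a measured EEG figure):

from math import log2

def wpm(n_triggers, selections_per_sec, bits_per_char=2.0, chars_per_word=6.0):
    """Upper bound implied by an error-free discrete channel."""
    bits_per_sec = selections_per_sec * log2(n_triggers)
    return 60 * bits_per_sec / (bits_per_char * chars_per_word)

print(wpm(8, 1.0))    # 8 triggers at 1 selection/s -> 15.0 wpm
print(wpm(16, 2.0))   # doubling both -> 40.0 wpm

The 2 bits per character figure assumes a decent language model doing most of the work (which is Dasher's trick); real systems lose more to errors and pauses, which is one reason reported EEG rates sit well below keyboard speeds.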
From pharos at gmail.com Fri Jan 22 23:56:16 2010
From: pharos at gmail.com (BillK)
Date: Fri, 22 Jan 2010 23:56:16 +0000
Subject: [ExI] EPOC EEG headset
In-Reply-To: <4B5A3751.4030908@satx.rr.com>
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <4B5A3751.4030908@satx.rr.com>
Message-ID:

On 1/22/10, Damien Broderick wrote:
> Imagine how distraught they get when I whip out my E-meter for a bris I
> mean brisk mid-flight thetan clearing.
>

Now you wouldn't really attempt a bris on a plane, would you? There must be a law against it somewhere. Brings tears to my eyes.

BillK

From thespike at satx.rr.com Sat Jan 23 00:06:54 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 18:06:54 -0600
Subject: [ExI] EPOC EEG headset
In-Reply-To:
References: <4650F17F2E264B828808D74CB62D3AEB@spike> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <4B5A3751.4030908@satx.rr.com>
Message-ID: <4B5A3D9E.7090903@satx.rr.com>

On 1/22/2010 5:56 PM, BillK wrote:

> Now you wouldn't really attempt a bris on a plane, would you?
> There must be a law against it somewhere.

Of course I would, but it takes a lot longer now that we have to use those terror-safe blunt plastic knives.

Damien Broderick

From thespike at satx.rr.com Sat Jan 23 00:14:01 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 22 Jan 2010 18:14:01 -0600
Subject: [ExI] cosmological model gets it mostly right
Message-ID: <4B5A3F49.6090709@satx.rr.com>

Galaxies shaped by dark past
Friday, 22 January 2010
by Heather Catchpole
Cosmos Online

SYDNEY: Using a detailed cosmological model that includes dark energy and dark matter, two American astrophysicists have been able to correctly predict the shapes and proportions of the different types of galaxies in the universe and discover the Milky Way's past.

The shape of galaxies depends on their turbulent history, and understanding how they evolve is a major task. Astrophysicists Nick Devereux of Embry-Riddle University in Arizona and Andrew Benson of the California Institute of Technology used a sophisticated computer model called GALFORM, combined with data from the infrared Two Micron All Sky Survey, which scanned 70% of the sky between 1997 and 2001.

Researchers were "completely astonished"

GALFORM simulates galaxy formation in a universe dominated by the enigmatic dark energy and dark matter. It's based on a cosmological model of the universe called the Lambda Cold Dark Matter (LCDM) model that predicts how matter flows and lumps together. The Lambda component represents "dark energy", which drives the expansion of the universe.

The model was able to reproduce the evolutionary history of the universe over its 13.7 billion years. Moreover it not only got the shapes but also the numbers of various galaxies right, and the rate at which galaxy mergers occur.
"We were completely astonished that our model predicted both the abundance and diversity of galaxy types so precisely," said astrophysicist Nick Devereux of Embry-Riddle University in Arizona. "It really boosts my confidence in the model," said Benson.

Shapely galaxies

If galaxies are close enough together, then gravity can cause them to merge, with spiral galaxies morphing to elliptical galaxies. The Milky Way and its neighbour Andromeda are close enough that this will happen. Benson and Devereux said that their model, published in the Monthly Notices of the Royal Astronomical Society, shows that the Milky Way has a complex past but so far has only undergone minor collisions and the gravitational collapse of its inner disk to form the central bar.

A galaxy's shape depends on how it formed, and can vary from elliptical and lens shapes to spirals. American astronomer Edwin Hubble defined these ranges of galactic shapes as the "Hubble sequence". They appear as elliptical blobs, or spiral disks with circles or bars at the centre. Our own Milky Way is classified as a barred spiral.

But the story of how the shapes arise is incredibly complex, so much so that it stretches the limit of current computing capacity. To understand it, astrophysicists use analytical models that can give an approximation of the physics involved in everything from the evolution of stars to the merging of entire galaxies. Benson and Devereux were able to predict the shapes and proportions of galaxies with bulges and discs, or just discs.

Model predicts too many dwarves

Australian astrophysicist Geraint Lewis from the University of Sydney says while the results are very encouraging, there are still a few holes in the LCDM model, which the authors acknowledge.

"These guys have refined the recipe: what's coming out is not only elliptical and spiral galaxies but the right proportion of these galaxies, which is very encouraging, but it's not the end of the answer," Lewis said.

While the model is working well on large scales, it predicts a greater number of dwarf galaxies (small galaxies like the nearby Magellanic Clouds) than we actually observe, says Lewis. "The [dwarf galaxies] are inconsequential in some way but the number of them is important. If your recipe was right you should get dark blobs with nothing in them, but if these are not there, it's a problem for the LCDM."

[[comment: obviously they've been harvested...]]

From p0stfuturist at yahoo.com Fri Jan 22 23:16:31 2010 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 22 Jan 2010 15:16:31 -0800 (PST) Subject: [ExI] run like the wind Message-ID: <425843.19260.qm@web59901.mail.ac4.yahoo.com>

Humans could perhaps run as fast as 40 mph, a new study suggests. Such a feat would leave in the dust the world's fastest runner, Usain Bolt, who has clocked nearly 28 mph in the 100-meter sprint.

The new findings come after researchers took a new look at the factors that limit human speed. Their conclusions? The top speed humans could reach may come down to how quickly muscles in the body can move.

Previous studies have suggested the main hindrance to speed is that our limbs can only take a certain amount of force when they strike the ground. This may not be the whole story, however.

"If one considers that elite sprinters can apply peak forces of 800 to 1,000 pounds with a single limb during each sprinting step, it's easy to believe that runners are probably operating at or near the force limits of their muscles and limbs," said Peter Weyand of Southern Methodist University, one of the study's authors.

But Weyand and colleagues found in treadmill tests that our limbs can handle a lot more force than what is applied during top-speed running.

What really holds us back

Their results showed the critical biological limit is imposed by time - specifically, the very brief periods of time available to apply force to the ground while sprinting. In elite sprinters, foot-ground contact times are less than one-tenth of a second, and peak ground forces occur within the first one-twentieth of that second of contact.

To figure out what limits how fast we can run, the researchers used a high-speed treadmill equipped to precisely measure the forces applied to its surface with each footfall. Study participants then ran on the treadmill using different gaits, including hopping, and running forward and backwards as fast as they possibly could.

The ground forces applied while hopping on one leg at top speed exceeded those applied during top-speed forward running by 30 percent or more. That suggests our limbs can handle greater forces than those found for two-legged running at top speeds.

And although top backward speed was substantially slower than top forward speed, as expected, the minimum periods of foot-ground contact at top backward and forward speeds were essentially identical. The fact that these two drastically different running styles had such similar intervals for foot-ground contact suggests that there is a physical limit to how fast your muscle fibers can work to get your feet off the ground, the researchers say.

New speed limit

The new work shows that running speed limits are set by the contractile speed limits of the muscle fibers themselves, with fiber contractile speeds setting the limit on how quickly the runner's limb can apply force to the running surface.

"Our simple projections indicate that muscle contractile speeds that would allow for maximal or near-maximal forces would permit running speeds of 35 to 40 miles per hour and conceivably faster," said study co-author Matthew Bundle.

While 40 mph may not impress the cheetah, the world's fastest land animal, which reaches speeds of 70 mph (112 kph), it's enough to escape a grizzly bear and much quicker than T. rex, which may have reached 18 mph (29 kph) during a good jog.

The results were published in the January issue of the Journal of Applied Physiology.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From thespike at satx.rr.com Sat Jan 23 00:40:44 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 22 Jan 2010 18:40:44 -0600 Subject: [ExI] vampires In-Reply-To: <20100122230542.FZZE7.374991.root@hrndva-web28-z01> References: <20100122230542.FZZE7.374991.root@hrndva-web28-z01> Message-ID: <4B5A458C.5020104@satx.rr.com>

On 1/22/2010 5:05 PM, jameschoate at austin.rr.com wrote:
> Then you're reading comprehension is having a bad day off work.

Sir, if you believe that, you'll believe anything, as the Duke of Wellington was once obliged to remark. By the way, the word you were looking for is "your."
From mbb386 at main.nc.us Sat Jan 23 00:57:31 2010 From: mbb386 at main.nc.us (MB) Date: Fri, 22 Jan 2010 19:57:31 -0500 (EST) Subject: [ExI] EPOC EEG headset In-Reply-To: <4B5A3D9E.7090903@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <4B5A3751.4030908@satx.rr.com> <4B5A3D9E.7090903@satx.rr.com> Message-ID: <35997.12.77.168.178.1264208251.squirrel@www.main.nc.us>

> On 1/22/2010 5:56 PM, BillK wrote:
>
>> Now you wouldn't really attempt a bris on a plane, would you?
>> There must be a law against it somewhere.
>
> Of course I would, but it takes a lot longer now that we have to use
> those terror-safe blunt plastic knives.
>
> Damien Broderick
>

Arrrgh. I am totally squicked! :( :( :(

Regards, MB

From thespike at satx.rr.com Sat Jan 23 01:09:45 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 22 Jan 2010 19:09:45 -0600 Subject: [ExI] EPOC EEG headset In-Reply-To: <35997.12.77.168.178.1264208251.squirrel@www.main.nc.us> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <4B5A3751.4030908@satx.rr.com> <4B5A3D9E.7090903@satx.rr.com> <35997.12.77.168.178.1264208251.squirrel@www.main.nc.us> Message-ID: <4B5A4C59.8030000@satx.rr.com>

On 1/22/2010 6:57 PM, MB wrote:
> Arrrgh. I am totally squicked! :( :( :(

Understandably, but imagine how your thetan feels when I use a Homeland Security-approved plastic E-meter.

Damien Broderick

From ablainey at aol.com Sat Jan 23 01:11:11 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 22 Jan 2010 20:11:11 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: <4B5A2E37.1000308@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com><1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com><8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com><55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com><8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> Message-ID: <8CC69E015E604D5-B38-37F7@webmail-m034.sysops.aol.com>

I was looking at it from a rehab and occupational therapy point of view, for use by the disabled. Going on from that, as you ask: for the average person, what is the advantage compared to dictation software? The silence could be very useful and ideal for travel. You could happily work on a plane without disturbing other passengers. However as already said, you can probably expect some kind of cavity search until the technology is commonplace. I haven't tried typing yet using only thought activation. There may be some speed benefit after practicing for a while, although I doubt it. I'll let you know.
-----Original Message----- From: Damien Broderick

How is this better than dictating to something like Dragon NaturallySpeaking? It's marvelous, but so is typing with your elbow. No great benefit unless your vocal chords and fingers have been removed or disabled. (Is the advantage that it can be done silently? That could be important on a plane, say, or in a crowded room. If you could do this without being arrested as a Terrorist with a Device.)

Damien Broderick

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ablainey at aol.com Sat Jan 23 01:13:12 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 22 Jan 2010 20:13:12 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: References: <4650F17F2E264B828808D74CB62D3AEB@spike><55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com><1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com><8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com><55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com><8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com><8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com><4B5A2E37.1000308@satx.rr.com><4B5A3751.4030908@satx.rr.com> Message-ID: <8CC69E05E1E7ACC-B38-3849@webmail-m034.sysops.aol.com>

There are probably as many laws protecting your right to do it. You wouldn't want any turbulence on that flight! (cringe)

-----Original Message----- From: BillK

Now you wouldn't really attempt a bris on a plane, would you? There must be a law against it somewhere. Brings tears to my eyes.

BillK

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ablainey at aol.com Sat Jan 23 01:14:54 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 22 Jan 2010 20:14:54 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: <4B5A3D9E.7090903@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <4B5A3751.4030908@satx.rr.com> <4B5A3D9E.7090903@satx.rr.com> Message-ID: <8CC69E099DC7645-B38-3889@webmail-m034.sysops.aol.com>

ooooooooooooooooh crossing legs in horror. That's gotta smart!

-----Original Message----- From: Damien Broderick To: ExI chat list Sent: Sat, 23 Jan 2010 0:06 Subject: Re: [ExI] EPOC EEG headset

On 1/22/2010 5:56 PM, BillK wrote:
> Now you wouldn't really attempt a bris on a plane, would you?
> There must be a law against it somewhere.

Of course I would, but it takes a lot longer now that we have to use those terror-safe blunt plastic knives.

Damien Broderick

_______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Sat Jan 23 01:30:11 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 22 Jan 2010 19:30:11 -0600 Subject: [ExI] terror laws bris In-Reply-To: <8CC69E099DC7645-B38-3889@webmail-m034.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <4B5A3751.4030908@satx.rr.com> <4B5A3D9E.7090903@satx.rr.com> <8CC69E099DC7645-B38-3889@webmail-m034.sysops.aol.com> Message-ID: <4B5A5123.8060804@satx.rr.com> On 1/22/2010 7:14 PM, ablainey at aol.com wrote: > ooooooooooooooooh crossing legs in horror. That's the last thing you want to do after an operation like that. Riding a bike too soon after a vasectomy was bad enough. Damien Broderick From sjatkins at mac.com Sat Jan 23 01:52:59 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 22 Jan 2010 17:52:59 -0800 Subject: [ExI] EPOC EEG headset In-Reply-To: <4B5A2E37.1000308@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> Message-ID: <8C7330F4-D69D-4AE9-BCF2-071BE2E12647@mac.com> On Jan 22, 2010, at 3:01 PM, Damien Broderick wrote: > On 1/22/2010 4:19 PM, ablainey at aol.com wrote: > >> I Just had a go with it. I had to make a quick program to generate mouse >> clicks from some of the EPOC outputs. But now I can type with Dasher, >> Highlight the text, copy, change window, and paste the text all hands free. >> Dasher is slower than typing, but my spelling would be better with it! >> I estimated about 50 wpm on my first attempt, but that should increase >> with use. But it is hands free! > > How is this better than dictating to something like Dragon NaturallySpeaking? It's marvelous, but so is typing with your elbow. No great benefit unless your vocal chords and fingers have been removed or disabled. (Is the advantage that it can be done silently? That could be important on a plane, say, or in a crowded room. If you could do this without being arrested as a Terrorist with a Device.) It is better when you don't want to be mumbling to yourself or banging about with your elbow as you mention. This is often the case in a meeting or talking with someone. Not to mention that it is a further baby step on the path to a direct mind-computer interface. 
-s

From sjatkins at mac.com Sat Jan 23 02:05:51 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 22 Jan 2010 18:05:51 -0800 Subject: [ExI] EPOC EEG headset In-Reply-To: References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <55ad6af71001221505s10424970xca462c7c25b38344@mail.gmail.com> Message-ID: <8D19AA96-54BA-42D6-A818-98E93B11D0AC@mac.com>

On Jan 22, 2010, at 3:50 PM, spike wrote:
>> ...On Behalf Of Bryan Bishop
> ...
>> Subject: Re: [ExI] EPOC EEG headset
>> ...
>> In general, that's what I've been trying to figure out: what
>> advantages does EEG actually give you, and in particular what
>> are the future prospects?... - Bryan
>
> The use of an EEG for typing would be just a technology demonstration,
> since we have better ways to get thoughts to text. But the demo will
> perhaps get people thinking of possible uses of mind-machine interfaces.
> The one that came to mind with me is to use the technology for sex machines.

I suppose in the middle of online sex one's hands may be busy with other things than typing and the other noises might mess up speech to text. :) But then I would think the brain lighting up sexually would give EEG typing fits too.

- samantha

From lacertilian at gmail.com Sat Jan 23 01:35:29 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 22 Jan 2010 17:35:29 -0800 Subject: [ExI] Extinction By Failure To Procreate Message-ID:

2010-01-22, spike :
> Of course if we get sex machines that can read our minds and do what we want
> them to do, it means the extinction of humanity by failure to procreate.
> But at least the last generation will really enjoy themselves. Just
> thinking of the pile of money to be made by the first company to develop a
> mind reading sex machine sends chills up and down my spine.

Maybe you've already seen this, but: http://www.bohemiandrive.com/comics/npwil/1.html

It isn't my custom to distribute pure entertainment materials through Extropy-Chat. In this case, my hand was forced!

From msd001 at gmail.com Sat Jan 23 02:43:47 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 22 Jan 2010 21:43:47 -0500 Subject: [ExI] heaves a long broken psi In-Reply-To: <74E685E20B9E4F2D8F7AD4DD32827EC0@spike> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <74E685E20B9E4F2D8F7AD4DD32827EC0@spike> Message-ID: <62c14241001221843i4d8f46t951e5761c1112841@mail.gmail.com>

On Fri, Jan 22, 2010 at 6:20 PM, spike wrote:
>
>>> On Behalf Of spike
>> ...
>>
>> If even less powerful to the point of being only a bit more
>> mighty than a natural being, would it then be known as a mitemighty?
>>
>> {8^D spike
>
> No wait, that would be a being who had supernatural powers but not all the
> time. The powers would mysteriously turn on and off as if by some
> stochastic process, a cosmic roll of the dice determining if the
> supernatural powers were there or not. Or am I getting it confused with a
> might-mighty?
>
> What if a supernatural being had one millionth the powers ordinarily
> assigned to god? Would that be a micromighty? And if a thousandth, a
> millimighty?

that earlier post was confused with a yeasty spread god the all-vegemitemighty

and the indeterminately godlike is a maybemighty

i believe small scale gods are nanomighty

JJ Walker was dyno-mighty

From spike66 at att.net Sat Jan 23 02:17:14 2010 From: spike66 at att.net (spike) Date: Fri, 22 Jan 2010 18:17:14 -0800 Subject: [ExI] EPOC EEG headset In-Reply-To: <8CC69E015E604D5-B38-37F7@webmail-m034.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com><1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com><8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com><55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com><8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com><4B5A2E37.1000308@satx.rr.com> <8CC69E015E604D5-B38-37F7@webmail-m034.sysops.aol.com> Message-ID: <2732CF33DEBA43178A5FCD772EF05A19@spike>

...On Behalf Of ablainey at aol.com Subject: Re: [ExI] EPOC EEG headset

>...I was looking at it from a rehab and occupational therapy point of view, for use by the disabled...

So often we see these kinds of enhancements as applications for the disabled. But that market isn't all that large, and the average amount of money available per capita in the disabled community may be lower than in the general population. On the other hand, consider the socially disabled community, which would include those who are too busy for romance, or the obsessive compulsives, those whose minds work in a delightfully different way, the ugly, the 3 sigma fat or skinny, the psychopaths, leprosy victims, the MENSAs, the list goes on and on. The socially challenged community is waaaay bigger than the disabled, and the amount of disposable cash per capita might actually be larger on average than the normal community. We need to harness the EPOC EEG technology to make better sex machines.

In the meantime, I thought of a simpler version: a video game or Second Life-like sim that is specifically designed to be titillating. We could use feedback from the EPOC to adjust what it sends one's way. Actually the EPOC mind reading thing might be overkill for that application, since simpler instrumentation elsewhere might be more easily arranged. I could even design such instrumentation methinks.

spike

From spike66 at att.net Sat Jan 23 02:52:06 2010 From: spike66 at att.net (spike) Date: Fri, 22 Jan 2010 18:52:06 -0800 Subject: [ExI] EPOC EEG headset In-Reply-To: <8D19AA96-54BA-42D6-A818-98E93B11D0AC@mac.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com><1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com><8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com><55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com><8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com><8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com><4B5A2E37.1000308@satx.rr.com><55ad6af71001221505s10424970xca462c7c25b38344@mail.gmail.com> <8D19AA96-54BA-42D6-A818-98E93B11D0AC@mac.com> Message-ID:

Ablainey, I do thank you for introducing such an interesting thread.

>...On Behalf Of Samantha Atkins
> ...
>> The one that came to mind with me is to use the technology
>> for sex machines.
>
> I suppose in the middle of online sex one's hands may be busy
> with other things than typing and the other noises might mess
> up speech to text. :)...

Ja! That's the spirit Samantha.

> ...But then I would think the brain
> lighting up sexually would give EEG typing fits too... - samantha

Brain lighting up sexually, I like that imagery, sounds like FIRE. The text so generated might be a lot of fun to read afterwards however. I did start thinking about the details of both the instrumentation and the software, wondering: if we put together the right talent, could we make something marketable? Our company could even have a theme song, if we modify the words of that rock and roll guy who was slain by one of his own fans, what's his name, John Lenin? The song would go something like this:

Imagine no sex partners,
It isn't hard to do,
They're all out with others,
More normal than me or you;
Imagine all the people,
who need a sexbot to screwww, booo hoooo hoo hoo...
You may say I'm a dreamer,
But I'm not the only one,
I hope some day you'll join us,
The cash we make will weigh a ton.

spike

From msd001 at gmail.com Sat Jan 23 03:08:36 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 22 Jan 2010 22:08:36 -0500 Subject: [ExI] vampires In-Reply-To: <20100122230217.16F2E.374954.root@hrndva-web28-z01> References: <4DD956D3B9D84B4295DC477B5DA22E04@spike> <20100122230217.16F2E.374954.root@hrndva-web28-z01> Message-ID: <62c14241001221908r203ef437t92a1b22fb3c7e453@mail.gmail.com>

On Fri, Jan 22, 2010 at 6:02 PM, wrote:
> There is a good point to be made here, that is seldom actually made.
>
> Existence trumps interpretation. Therefore if vampires did exist they by definition would be natural. The term/phrase 'natural' is a anthropocentricism and actually has no significance. If anything it's a clear indicator, though unintended, of our limited thought process.
>
> ---- spike wrote:
>
>> What's supernatural about vampires?

if the insignificant anthropocentricism is removed then the question becomes: What's super about vampires? I think that's a much different question.

I'm not sure how to even research it, but I have a feeling vampires evolved out of a really good cover story for some lost-to-antiquity infidelity. A woman explains to her husband that the "bites" on her neck were from some bloodlusting creature that entered through the window and nearly consumed her essence till she was faint, almost killing her; then he flew off into the night. Today we call those bites a hickey. The blood is vitality, the lusting obvious. Surely he entered. She likely nears the literary little death each time he visits. With nothing but bats flapping in the moonlit sky, there is no man to accept blame. It's the (nearly) perfect crime. Of course this leads to an addiction and dependence, so the story continues to grow more outlandish over time.

I think witches riding broomsticks in front of the full moon are a similar cover story for common rated-R activity. And that whole "only virgins can see unicorns" malarkey - please...
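Several posts in this thread describe bridging the EPOC's trained "thought events" to ordinary keyboard and mouse events: ablainey's quick program generating mouse clicks for Dasher, and the window-focus idea in the next message. A minimal sketch of such a bridge in Python, assuming a hypothetical emotiv_events() stream in place of the real Emotiv SDK (whose API differs); pynput is a real library for synthesizing input events.

```python
# Minimal sketch of an EPOC-to-input bridge of the kind described in
# this thread. emotiv_events() is a hypothetical stand-in for the
# headset's classifier output; the real vendor SDK is different.

from pynput.mouse import Button, Controller as Mouse
from pynput.keyboard import Key, Controller as Keyboard

mouse, keyboard = Mouse(), Keyboard()

def alt_tab():
    # Hold Alt and tap Tab: cycles window focus on most desktops.
    with keyboard.pressed(Key.alt):
        keyboard.tap(Key.tab)

# Map named mental-command events to input actions.
ACTIONS = {
    "push": lambda: mouse.click(Button.left),   # trained "push" thought
    "pull": lambda: mouse.click(Button.right),
    "lift": alt_tab,                            # hands-free focus switching
}

def emotiv_events():
    """Hypothetical event source; replace with the vendor SDK stream."""
    yield from ["push", "lift", "push"]

for event in emotiv_events():
    action = ACTIONS.get(event)
    if action:
        action()
```

The design point is that the headset side only has to emit a small vocabulary of named events; everything after that is ordinary input synthesis, which is why a Dasher-style on-screen keyboard works with nothing more than a click generator.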
From msd001 at gmail.com Sat Jan 23 03:57:42 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 22 Jan 2010 22:57:42 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: <2732CF33DEBA43178A5FCD772EF05A19@spike> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> <8CC69E015E604D5-B38-37F7@webmail-m034.sysops.aol.com> <2732CF33DEBA43178A5FCD772EF05A19@spike> Message-ID: <62c14241001221957m43ac9ffap1eab7e8988f77cd3@mail.gmail.com>

On Fri, Jan 22, 2010 at 9:17 PM, spike wrote:
> So often we see these kinds of enhancements as applications for the
> disabled. But that market isn't all that large, and the average amount of
> money available per capita in the disabled community may be lower than in
> the general population.
>
> In the mean time, I thought of a simpler version: a video game or Second
> Life-like sim that is specifically designed to be titillating. We could use
> feedback from the EPOC to adjust what it sends one's way. Actually the EPOC
> mind reading thing might be overkill for that application, since simpler
> instrumentation elsewhere might be more easily arranged. I could even
> design such instrumentation methinks.

I'd like to have window focus follow my intention. A coworker currently employs 4 monitors for his desktop. I am still fairly content with two, but I'm also managing stacks of windows on each. It might be a nice speed improvement to keep hands on keyboard and replace mouse actions (which remove the hands from keyboard) with window-management thinking.

Multidimensional modeling and visualization might be aided by 'thinking' of a rotation rather than having to remember which keys turn which direction in each dimension.

From ablainey at aol.com Sat Jan 23 04:08:20 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 22 Jan 2010 23:08:20 -0500 Subject: [ExI] vampires In-Reply-To: <62c14241001221908r203ef437t92a1b22fb3c7e453@mail.gmail.com> References: <4DD956D3B9D84B4295DC477B5DA22E04@spike><20100122230217.16F2E.374954.root@hrndva-web28-z01> <62c14241001221908r203ef437t92a1b22fb3c7e453@mail.gmail.com> Message-ID: <8CC69F8D51B7AE1-2468-160CD@webmail-d071.sysops.aol.com>

An odd topic for discussion, but ok i'll bite :o= (that's a vampire smiley)

As I understand it, if a human consumes blood, a percentage is absorbed into the bloodstream, presumably through the intestinal wall after passing through the stomach. Now, with the vampire tales of old being about vampires preferentially draining the blood of infants, this leads me to wonder whether said vampire would ingest and absorb stem cells, which are in greater abundance in an infant. These stem cells, once in the bloodstream, would presumably migrate to areas of damage, thus repairing the vampire's body with long-telomere cells much more youthful in nature than the vampire's own? The nutrient content of the blood would make it a perfect meal anyway.

I would agree that the most probable cause of the myth was to cover some kind of questionable activity, or possibly to prevent it amongst a society.
-----Original Message----- From: Mike Dougherty

From ablainey at aol.com Sat Jan 23 04:10:48 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 22 Jan 2010 23:10:48 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: <62c14241001221957m43ac9ffap1eab7e8988f77cd3@mail.gmail.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com><1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com><8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com><55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com><8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com><8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com><4B5A2E37.1000308@satx.rr.com><8CC69E015E604D5-B38-37F7@webmail-m034.sysops.aol.com><2732CF33DEBA43178A5FCD772EF05A19@spike> <62c14241001221957m43ac9ffap1eab7e8988f77cd3@mail.gmail.com> Message-ID: <8CC69F92D43EC4A-2468-1610B@webmail-d071.sysops.aol.com>

hmmm, I think I might be able to knock something up to do that. I'll have a play with the idea tomorrow.

-----Original Message----- From: Mike Dougherty

I'd like to have window focus follow my intention. A coworker currently employs 4 monitors for his desktop. I am still fairly content with two, but i'm also managing stacks of windows on each. It might be a nice speed improvement to keep hands on keyboard and replace mouse actions (which remove the hands from keyboard) with window-management thinking. multidimensional modeling and visualization might be aided by 'thinking' of a rotation rather than having to remember which keys turn which direction in each dimension.

_______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From eschatoon at gmail.com Sat Jan 23 08:03:16 2010 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Sat, 23 Jan 2010 09:03:16 +0100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100122181804.5.qmail@syzygy.com> References: <64697.50664.qm@web36501.mail.mud.yahoo.com> <20100122181804.5.qmail@syzygy.com> Message-ID: <1fa8c3b91001230003s665a3d80lbaa409ecf28f336e@mail.gmail.com>

Thanks Eric for this great one-liner, which explains a lot of things.

On Fri, Jan 22, 2010 at 7:18 PM, Eric Messick wrote:
> And yes, a digital simulation of a person should enjoy eating a
> digital simulation of an apple. If not, it's not a very good
> simulation.
> > -eric -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From eschatoon at gmail.com Sat Jan 23 08:07:18 2010 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Sat, 23 Jan 2010 09:07:18 +0100 Subject: [ExI] EPOC EEG headset In-Reply-To: <4B5A2E37.1000308@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <55ad6af71001202102u5176ae89ma164545fa47f2069@mail.gmail.com> <1fa8c3b91001220710i4f0eeb6eh7a86b4be20395fd0@mail.gmail.com> <8CC69A705A0A377-7028-13E2@webmail-d038.sysops.aol.com> <55ad6af71001221027v6ca88064sfe883018a41378d@mail.gmail.com> <8CC69AB128D812F-7028-1B5C@webmail-d038.sysops.aol.com> <8CC69C813A89117-5890-F102@webmail-m080.sysops.aol.com> <4B5A2E37.1000308@satx.rr.com> Message-ID: <1fa8c3b91001230007o3930b029hd16191348fe0f40@mail.gmail.com> At this moment, dictating to a speech recognition system is faster and better. But then, at the beginning of the 20th century, horses were faster and cheaper than cars. I am confident that in this decade consumer BCI devices will improve to permit a much faster text input to computer systems. But of course, their unique potential is for non-text input. On Sat, Jan 23, 2010 at 12:01 AM, Damien Broderick wrote: > On 1/22/2010 4:19 PM, ablainey at aol.com wrote: > >> I Just had a go with it. I had to make a quick program to generate mouse >> clicks from some of the EPOC outputs. But now I can type with Dasher, >> Highlight the text, copy, change window, and paste the text all hands >> free. >> Dasher is slower than typing, but my spelling would be better with it! >> I estimated about 50 wpm on my first attempt, but that should increase >> with use. But it is hands free! > > How is this better than dictating to something like Dragon > NaturallySpeaking? It's marvelous, but so is typing with your elbow. No > great benefit unless your vocal chords and fingers have been removed or > disabled. (Is the advantage that it can be done silently? That could be > important on a plane, say, or in a crowded room. If you could do this > without being arrested as a Terrorist with a Device.) > > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From stathisp at gmail.com Sat Jan 23 10:15:33 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 23 Jan 2010 21:15:33 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <844360.88902.qm@web36505.mail.mud.yahoo.com> References: <844360.88902.qm@web36505.mail.mud.yahoo.com> Message-ID: On 23 January 2010 00:23, Gordon Swobe wrote: > --- On Fri, 1/22/10, Stathis Papaioannou wrote: > >> At some point, there must be an association between a >> symbol and one of the special symbols which are generated by sensory >> data. Then the symbol is "grounded". > > You misunderstand symbol grounding. It's not about association of symbols with other symbols, per se. It's about comprehension of those symbols. > > The good people at Merriam Webster associate words with other words on paper and then publish those printed associations. Those words and their associated words are grounded only to the extent that some agent(s) comprehends the meanings of them. 
> > If every agent capable of comprehending word meanings died suddenly, they would leave behind dictionaries filled with ungrounded symbols. The words defined in those dictionaries would remain physically associated the words in their definitions, but nobody would be around to know what any of the symbols meant. The words would remain associated but they would become ungrounded. > > http://en.wikipedia.org/wiki/Symbol_grounding That article actually says that symbol grounding *is* possible in a computer with external input. However, it goes on to say that maybe this is necessary but not sufficient for meaning: maybe something else is needed for that. And that is what we have been debating. It seems to me that there is no basis for claiming that meaning is something over and above symbol grounding, to be provided by a mysterious and undetectable consciousness. Some philosophers and scientists react to the idea by saying that consciousness does not exist, but that is going too far: consciousness does exist, but it doesn't exist as something over and above the information processing underpinning it. Consciousness is just what happens when your brain processes information, and there is no reason to assume that it doesn't also happen if another brain, whatever its substrate, also processes information in the same way. However, I admit that it isn't immediately obvious that consciousness *must* happen in a brain designed with the same function but on a different substrate. That is why I have assumed for the sake of argument that consciousness and observable behaviour can be separated, as you suggested. This idea then leads to the possibility that you could be zombified and not realise it, which I think is absurd. You have agreed that it is absurd, so absurd that you could hardly stand to think about it. You have also not brought up any valid objection to the reasoning whereby the possibility of zombie brain components leads to this absurdity (initially you said that components which behave just like natural components would not behave just like natural components, but I take it you now see that this is not a valid objection). So, given this, can I now assume that you now agree with me that it is *not* possible to separate consciousness from brain behaviour? You haven't said so explicitly, but your latter responses imply it. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 23 10:27:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 23 Jan 2010 21:27:47 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <873824.16326.qm@web36504.mail.mud.yahoo.com> References: <873824.16326.qm@web36504.mail.mud.yahoo.com> Message-ID: On 23 January 2010 01:41, Gordon Swobe wrote: > --- On Thu, 1/21/10, Stathis Papaioannou wrote: > >>> To say "there is no logical pathway from a lump of >>> matter to meaning" is equivalent to saying that mind and >>> matter exist in separate realms. It seems then that you >>> really do want to espouse the mind/matter dualism handed >>> down to us from Descartes. >> >> I'm saying this to show where your assertion that syntax >> can't produce meaning leads. > > My assertion leads simply to a philosophy of mind in which the brain attaches meanings to symbols in some way that we do not yet fully understand. Nothing more. But this is unnecessary, at best. You could say we do understand how meaning is attached to symbols when they are finally attached to an environmental input. 
Only if you really *want* the brain to remain mysterious would you add the superfluous magical layer.

> In the next step of our journey we must decide between monism and not-monism (usually dualism). I choose monism.
>
> Looks to me like the world is comprised of just one kind of stuff. Some configurations of that one stuff have conscious understanding of symbols. Most if not all other configurations of that stuff do not.

Yes, but the claim that it is impossible for matter other than that in brains to produce consciousness is irrational. It may turn out that electronic circuits can't do it but that is a matter for scientific research; you have no *proof* that it is so, even if you keep your other claim that programs can't do it.

-- Stathis Papaioannou

From gts_2000 at yahoo.com Sat Jan 23 14:54:47 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 06:54:47 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100122181804.5.qmail@syzygy.com> Message-ID: <575387.49824.qm@web36505.mail.mud.yahoo.com>

--- On Fri, 1/22/10, Eric Messick wrote:

> Most thermostats installed today are digital simulations of
> analog thermostats. They manage to get the job done anyway.

Modern thermostats contain digital circuitry but they do not equal digital simulations of analog thermostats.

To see this, imagine that you have an instrument for scanning objects to create digital simulations. You scan an analog thermostat and observe the resulting simulation on your computer. You will not see a real digital thermostat appear on your computer screen. Instead you will see a digital simulation of a non-digital object. That simulation will not have the properties of the original; it will not have the capacity to regulate temperature in your room.

At best that simulated analog thermostat can regulate simulated temperature in a simulated room that you also create on your computer, and then only as a digital simulation of an analog thermostat, not as a digital simulation of a digital thermostat.

-gts

From jonkc at bellsouth.net Sat Jan 23 15:45:10 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 23 Jan 2010 10:45:10 -0500 Subject: [ExI] psi in Nature. In-Reply-To: <4B59FA6B.7060400@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> <4B59FA6B.7060400@satx.rr.com> Message-ID:

On Jan 22, 2010, Damien Broderick wrote:

> Anticipated response: "Yeah, right, and what else have they published there in the last 36 years? BULLSHIT!"

Thank you Damien, you saved me some time.

John K Clark

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From reinhard.heil at googlemail.com Sat Jan 23 07:34:48 2010 From: reinhard.heil at googlemail.com (Reinhard H.) Date: Sat, 23 Jan 2010 08:34:48 +0100 Subject: [ExI] EPOC EEG headset In-Reply-To: <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> Message-ID: <3ffdac781001222334n37b86086ldacd0ea0b742ff1d@mail.gmail.com>

hi alex, i do not believe that this device really reads thought patterns. all the consumer devices i know read only the electrical activity of the facial and scalp muscles. try the following: take a video of your face while using the device and take a close look at your facial musculature. i'm sure (99.9%+) that you unconsciously move your face by trying to move the robot arm.
the electrical activity of this movement is very much higher than the activity of your brain (through the skull). the device reads this activity and not the activity of your brain.

best regards
reinhard

2010/1/21 :
> Hi Everyone.
> Just thought i'd stop lurking to give you a link to a video I have just put
> on Youtube. I have just got my hands on an EPOC headset. Which, for those
> that don't know. It reads various thought patterns which can be interpreted
> and used to trigger keyboard events etc. I have cobbled together a 5 axis
> robot arm, switchboxes and a quick bit of software to read the keyboard
> event which are output from the Epoc and the result is a brain controlled
> robot arm.
> I don't need to state the implications.
>
> http://www.youtube.com/watch?v=4Cq35VbRpTY
>
> Leave a comment, or let me know if you want to replicate the set up for your
> self.
>
> All the best
> Alex
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From jonkc at bellsouth.net Sat Jan 23 16:11:27 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 23 Jan 2010 11:11:27 -0500 Subject: [ExI] heaves a long broken psi. In-Reply-To: <4B5A0DBF.50104@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> Message-ID: <4C15C2DD-F22B-493E-A056-2BF09AA29D8A@bellsouth.net>

On Jan 22, 2010, Damien Broderick wrote:

> This misuses the word. If ESP is real, there is no reason to suppose that it functions by abrogating the laws of physics; far more economical to suppose that we do not yet fully understand all those laws. In the 19th century, the radioactive heating of the sun was not supernatural, just unexplained.

There is an important difference. In the 19th century there was an excellent reason for thinking something important and fundamental was missing; according to the then known laws of physics the sun couldn't be older than 50 million years and was probably closer to 10. But even then it was known that life was far older than that, and it was known from geological evidence that the Earth itself is several billion years old. Something didn't fit: the Earth can't be older than the sun.

Nobody has produced evidence that doesn't fit known laws in the ESP issue. Or at least nobody has if you don't count one 36-year-old paper that both the present editors of Nature and the editors of 36 years ago, if they are still alive, would agree to put on their list of the top 5 most embarrassing papers Nature has ever published. EVER!

John K Clark

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From thespike at satx.rr.com Sat Jan 23 16:40:13 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 10:40:13 -0600 Subject: [ExI] psi in Nature. In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> <4B59FA6B.7060400@satx.rr.com> Message-ID: <4B5B266D.9060507@satx.rr.com>

On 1/23/2010 9:45 AM, John Clark wrote:
> Thank you Damien, you saved me some time.

In other words, you make a demand, it's met, then you refuse to acknowledge this fact and change the rules. This is why I stopped debating the topic with you some years ago, so I really have to return to that policy. Others will draw their own conclusions.
Damien Broderick

From gts_2000 at yahoo.com Sat Jan 23 16:13:55 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 08:13:55 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <850539.98061.qm@web36507.mail.mud.yahoo.com>

Let us say that a botanist writes a massive tome about a certain apple. His book contains every fact about this certain apple. He titles his book _A Complete Description Of A Certain Apple_.

For his work on describing the apple, the botanist wins the Nobel Prize in Appleology. This catches the attention of Steve Jobs, who buys a copy of the book with the idea of charging one of his programmers with the task of creating the ultimate digital simulation of an apple.

Jobs and his programmer will need anything more than the book that completely describes the apple. The Apple programmer will *translate* that book from the English language into his favorite programming language.

Just as the original book that the programmer translated exists as a description of an apple, so too will the resulting digital simulation exist as a description of the apple. That description will no more taste like the original apple than will the description in the botanist's book. Digital simulations of apples do no more than describe apples in the same way that the books about apples describe apples.

And a digital simulation of a person eating an apple is likewise only a *description* of a person eating an apple.

We can create digitally simulated persons eating digitally simulated apples in the same sense that Tolkien created a description of "Gandalf smoking a pipe in Middle Earth".

We must as above bracket the proposition; that is, if we want to write about simulated people eating simulated apples then we must use scare-quotes and write that they "eat apples". We use the scare-quotes to indicate to the reader that we're not talking about reality.

We hope the reader can understand the difference between reality and the things he imagines.

-gts

From thespike at satx.rr.com Sat Jan 23 16:44:37 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 10:44:37 -0600 Subject: [ExI] The digital nature of brains In-Reply-To: <575387.49824.qm@web36505.mail.mud.yahoo.com> References: <575387.49824.qm@web36505.mail.mud.yahoo.com> Message-ID: <4B5B2775.3090600@satx.rr.com>

On 1/23/2010 8:54 AM, Gordon Swobe wrote:

> You scan an analog thermostat and observe the resulting simulation on your computer. You will not see a real digital thermostat appear on your computer screen. Instead you will see a digital simulation of a non-digital object.

Wrong. You see a digital representation of some aspects of the computer--perhaps its surface, perhaps some of its innards (for various values of "scan"). An emulation, by definition, is something that reproduces the effectivity of the original, not just its superficial appearance.

Damien Broderick

From thespike at satx.rr.com Sat Jan 23 16:53:23 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 10:53:23 -0600 Subject: [ExI] The digital nature of brains In-Reply-To: <4B5B2775.3090600@satx.rr.com> References: <575387.49824.qm@web36505.mail.mud.yahoo.com> <4B5B2775.3090600@satx.rr.com> Message-ID: <4B5B2983.1010902@satx.rr.com>

On 1/23/2010 10:44 AM, Damien Broderick wrote:
> Wrong. You see a digital representation of some aspects of the computer

Oops. Of the thermometer.
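The functional half of this distinction is easy to make concrete. A minimal sketch in Python, with every constant invented for illustration: decide() is the control law at the functional core of a thermostat, the same logic that could switch a real relay; wired to the toy RoomModel below it regulates only a simulated temperature in a simulated room, which is exactly the two cases being argued over in this thread.

```python
# Minimal sketch of a thermostat as a control loop. All names and
# constants are illustrative. The decide() function is substrate-
# neutral: point it at a real sensor and relay and it regulates a real
# room; point it at the toy model below and it regulates a simulated one.

class RoomModel:
    """Toy thermal model: the room leaks heat and the heater adds it."""
    def __init__(self, temp=15.0, outside=5.0, leak=0.1, heat=2.0):
        self.temp, self.outside, self.leak, self.heat = temp, outside, leak, heat

    def step(self, heater_on):
        self.temp += self.heat * heater_on - self.leak * (self.temp - self.outside)

def decide(temp, setpoint=20.0, hysteresis=0.5):
    """Bang-bang control with a deadband; None means keep current state."""
    if temp < setpoint - hysteresis:
        return 1  # heater on
    if temp > setpoint + hysteresis:
        return 0  # heater off
    return None

room, heater = RoomModel(), 0
for _ in range(60):
    d = decide(room.temp)
    heater = heater if d is None else d
    room.step(heater)
print(f"after 60 steps: {room.temp:.1f} C")  # oscillates near the 20 C setpoint
```

Nothing in decide() knows whether the number it receives came from a physical sensor or from RoomModel; the "interface to a real room" Gordon asks for, and the "effectivity" Damien points to, both live outside that function.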
From femmechakra at yahoo.ca Sat Jan 23 16:30:12 2010 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sat, 23 Jan 2010 08:30:12 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies. In-Reply-To: <4D7A30E3-CCFB-4EB4-8469-6E68E1184F53@bellsouth.net> Message-ID: <145800.58845.qm@web110406.mail.gq1.yahoo.com>

--- On Fri, 1/22/10, John Clark wrote:

> And what do you think the word
> "theory" means, a guess? A theory is a group of
> thoughts to explain something, such as Copernicus's
> theory that the Earth goes around the sun
> or Newton's theory of gravity or the theory of
> cause and effect. Some theories explain things better that
> others and no theory explains things better than
> Darwin's.

I have to agree, but a theory is still "a maybe". Sometimes a really good "maybe" and sometimes not so good, such as the Flat Earth hypothesis, or others such as Aristotle's dynamic motion and the classical elemental theory.

> I do give you credit for realizing that
> Gordon's ideas are totally incompatible with
> Darwin's, but joining the creationists camp seems like a
> very high price to pay to embrace his looney
> teachings.

Thank you. I do believe that there was some sort of supernatural event, as opposed to a being or beings that caused creation, and I believe in a "higher greater cosmic force". I'm not sure if that makes me a creationist, I get confused with all the labels.

> Yes it matters. Gordon says signals between
> neurons are not involved in consciousness, that means there
> are 100 billion completely independent entities in your head
> with absolutely no way to interact with each other in any
> way. And that is idiocy of the highest order.

My apology. I thought he meant that neurons don't know that they are involved in consciousness.

> That is not true at all, I don't need to
> personally experience Psi, as I've said many many times
> just show me a pro Psi article in Nature or Science,
> that's all I ask.

My point was that a theory, an idea or a thought may or may not be accurate, true or viable. That doesn't mean one day it won't be. You are probably right that there will never be an article in Nature or Science in our lifetime. Darwin's Theory has only been around for about 150 years; maybe next century the pros will deem it worthy to print ;)

Anna

__________________________________________________________________ Looking for the perfect gift? Give the gift of Flickr! http://www.flickr.com/gift/

From gts_2000 at yahoo.com Sat Jan 23 17:06:54 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 09:06:54 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: <4B5B2775.3090600@satx.rr.com> Message-ID: <283479.58527.qm@web36504.mail.mud.yahoo.com>

--- On Sat, 1/23/10, Damien Broderick wrote:

>> You scan an analog thermostat and observe the
>> resulting simulation on your computer. You will not see a
>> real digital thermostat appear on your computer screen.
>> Instead you will see a digital simulation of a non-digital
>> object.
>
> Wrong.

Show me how my digital simulation of the analog thermostat actually controls the temperature of any room in the real world. Looks to me like it can only regulate simulated temperature in a simulated room.

To make it work here in reality, I need to wire it to a real room through some interface on my real computer not included in the real original object simulated. Now I have a true thermostat, which I simulate again.
But the same problem arises: I create only another digital description of a thermostat -- and digital simulations of things do not equal the things they describe. Compare this to creating copies of digital objects like software, in which the "simulation" really does have all the real-world functionality of the original.

-gts

From stefano.vaj at gmail.com Sat Jan 23 17:14:35 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 23 Jan 2010 18:14:35 +0100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <610150.14412.qm@web36508.mail.mud.yahoo.com> Message-ID: <580930c21001230914xae85809u3cbc4a38dfc5de18@mail.gmail.com>

On 22 January 2010 00:33, Stathis Papaioannou wrote:
> Yes, but if it were possible to make brain components that function
> like the brain in every way except lacking consciousness, then it
> would be possible to arbitrarily remove any aspect of a persons
> consciousness and they would not realise that anything had changed.

Since no behaviours and functions would be excluded, a "zombie" would still be speaking with himself as we do (or he would not be a "perfect" zombie). Thus from all practical points of view he may well tell himself that he is conscious, and agree with himself on such a conclusion even though he is not.

The paradox lies of course in the fact that "consciousness" abstracted from the phenomena by which it is manifested is an empty concept, "not even wrong".

-- Stefano Vaj

From stefano.vaj at gmail.com Sat Jan 23 17:19:26 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 23 Jan 2010 18:19:26 +0100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: References: <758269.36155.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21001230919h314a017coe68b9bf06f4cc792@mail.gmail.com>

On 22 January 2010 00:37, Stathis Papaioannou wrote:
> 2010/1/22 Gordon Swobe :
>> Some people seem to deny the existence of consciousness and thus their own experiences of life in what look to me like vain attempts to escape the conclusion that humans might have something computers do not have. I don't have much to say to them.
>
> I agree that these people can't really deny the existence of
> consciousness. They must be meaning something other than what it looks
> like, or being provocative.

Yes, this may be the reason why you two like discussing between yourselves so much... :-)

I assume that on the contrary most people do not consider consciousness as something much more special or "noumenically existing" than, say, sleep.

-- Stefano Vaj

From gts_2000 at yahoo.com Sat Jan 23 17:21:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 09:21:44 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies Message-ID: <124842.55801.qm@web36507.mail.mud.yahoo.com>

I meant to write below that Jobs and his programmer will need [NOTHING] more than the book that completely describes the apple.

The book about the apple describes the apple, and the digital simulation acts only as a translation of the book. No matter whether the description exists in book form or in digital form, it only describes the apple, and descriptions of things do not equal the things they describe.

-gts

--- On Sat, 1/23/10, Gordon Swobe wrote:

> From: Gordon Swobe
> Subject: Re: [ExI] digital simulations, descriptions and copies
> To: "ExI chat list"
> Date: Saturday, January 23, 2010, 11:13 AM
> Let us say that a botanist writes a
> massive tome about a certain apple.
His book contains every > fact about this certain apple. He titles his book _A > Complete Description Of A Certain Apple_. > > For his work on describing the apple, the botanist wins the > Nobel Prize in Appleology. This catches the attention of > Steve Jobs, who buys a copy of the book with the idea of > charging one of his programmers with the task of creating > the ultimate digital simulation of an apple. > > Jobs and his programmer will need anything more than the > book that completely describes the apple. The Apple > programmer will *translate* that book from the English > language into his favorite programming language. > > Just as the original book that the programmer translated > exists as a description of an apple, so too will the > resulting digital simulation exist as a description of the > apple. That description will no more taste like the > original apple than will the description in the botanist's > book. Digital simulations of apples do no more than describe > apples in the same way that the books about apples describe > apples. > > And a digital simulation of a person eating an apple is > likewise only a *description* of a person eating an apple. > > We can create digitally simulated persons eating digitally > simulated apples in the same sense that Tolkien created a > description of "Gandalf smoking a pipe in Middle Earth". > > We must as above bracket the proposition; that is, if we > want to write about simulated people eating simulated apples > then we must use scare-quotes and write that they "eat > apples". We use the scare-quotes to indicate to the reader > that we're not talking about reality. > > We hope the reader can understand the difference between > reality and the things he imagines. > > -gts From thespike at satx.rr.com Sat Jan 23 17:28:58 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 11:28:58 -0600 Subject: [ExI] The digital nature of brains In-Reply-To: <283479.58527.qm@web36504.mail.mud.yahoo.com> References: <283479.58527.qm@web36504.mail.mud.yahoo.com> Message-ID: <4B5B31DA.4030405@satx.rr.com> On 1/23/2010 11:06 AM, Gordon Swobe wrote: >>> >> You scan an analog thermostat and observe the >>> >> resulting simulation on your computer. You will not see a >>> >> real digital thermostat appear on your computer screen. >>> >> Instead you will see a digital simulation of a non-digital >>> >> object. >> > >> > Wrong. > > Show me how my digital simulation of the analog thermostat actually controls the temperature of any room in the real world. Looks to me like it can only regulate simulated temperature in a simulated room. Jesus Christ, how many people on this list can actually read? You snip out most of what I wrote as if I were disagreeing with your assertions about simulations, when obviously I was drawing attention to the difference between *functional simulations* and *superficial representations*: "...of the [thermometer]--perhaps its surface, perhaps some of its innards (for various values of "scan"). An emulation, by definition, is something that reproduces the effectivity of the original, not just its superficial appearance." I doubt that my slip in typing "computer" when I meant "thermometer" has any bearing on your misprision. How about addressing my actual point?
Damien Broderick From jonkc at bellsouth.net Sat Jan 23 17:41:41 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 23 Jan 2010 12:41:41 -0500 Subject: [ExI] Intelligence Consciousness and ESP (was: digital simulations, descriptions and copies) In-Reply-To: <850539.98061.qm@web36507.mail.mud.yahoo.com> References: <850539.98061.qm@web36507.mail.mud.yahoo.com> Message-ID: On Jan 23, 2010, Gordon Swobe wrote: > Let us say that a botanist writes a massive tome about a certain apple. [blah blah blah] Let's say, just for the sake of argument, that this thought experiment of yours (which I only skimmed over) was not as utterly worthless as the nineteen dozen other thought experiments you have dreamed up; what would a logical person conclude from that? He would conclude that there is something puzzling about the link between intelligence and consciousness. Would he therefore conclude that there is no such link? Absolutely not: the physical evidence of such a link was overwhelming even a century ago, and with each passing year the proof just gets stronger. You are doing what Damien (incorrectly) accuses me of doing in the ESP matter, refusing to believe something even though there is a mountain of evidence showing it must be true because you can't figure out what mechanism it works by. If Gordon Swobe or even John Clark can't figure out how something could be, it doesn't follow that the thing in question cannot be; in spite of everything the evidence still remains, and if we can't explain the data then that's just tough. John K Clark From thespike at satx.rr.com Sat Jan 23 17:43:25 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 11:43:25 -0600 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <124842.55801.qm@web36507.mail.mud.yahoo.com> References: <124842.55801.qm@web36507.mail.mud.yahoo.com> Message-ID: <4B5B353D.6030709@satx.rr.com> On 1/23/2010 11:21 AM, Gordon Swobe wrote: > The book about the apple describes the apple, and the digital simulation acts only as a translation of the book. No matter whether the description exists in book form or in digital form, it only describes the apple and descriptions of things do not equal the things they describe. Ah, so you *almost* get it. But you still confuse yourself by conflating emulation and description. The Apple apple is just another description; it can't be an emulation. But a calculator represented on a computer monitor can be both, if it serves as a GUI to a series of algorithmic processes that emulate or in this case instantiate calculations. A child's plastic telephone might have buttons and a small device that pings, but all it does is represent some surface aspects of a real phone. But a phone image on a computer monitor can make real phone connections because it is the GUI to an actual digital phone system. A monitor display of a true human mind emulation would be an iconic representation of an immensely complex program copying the functionality of biological sensors, effectors and internal processors. Stop looking at the finger and consider what it's pointing to.
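To make the distinction concrete, here is a minimal sketch in Python (a toy with invented names; no claim that any real calculator software looks like this). A description of a calculator is inert text about the object; an emulation actually produces the results the original would produce:

    # A *description* of a calculator: inert, computes nothing.
    description = "a grey box with number keys, a + key, and an = key"

    # An *emulation* of a calculator: actually produces the results.
    def calculator(expression):
        # handles toy input of the form "a+b"
        left, right = expression.split("+")
        return int(left) + int(right)

    print(description)        # just words about a calculator
    print(calculator("2+3"))  # 5 -- a real calculation, not a picture of one

Hook the second of these up to on-screen buttons and you have exactly the calculator-on-a-monitor case: a representation on the surface, an emulation underneath.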
Damien Broderick From stefano.vaj at gmail.com Sat Jan 23 17:51:32 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 23 Jan 2010 18:51:32 +0100 Subject: [ExI] Brains, Computers, AI and Uploading Message-ID: <580930c21001230951w32d675ecke839a988fad98aa2@mail.gmail.com> I have just happened to stumble on a brief essay which seems to offer the ultimate philosophical "mise à point" on the debates having resurfaced on this list on the implementation of "intelligence", "consciousness" or specific individuals on platforms different from organic brains. There it is: http://www.fqxi.org/community/forum/topic/essay-download/596/__details/Wolfram_WhatIsUltimatelyPos_1.pdf -- Stefano Vaj From thespike at satx.rr.com Sat Jan 23 17:51:45 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 11:51:45 -0600 Subject: [ExI] Intelligence Consciousness and ESP In-Reply-To: References: <850539.98061.qm@web36507.mail.mud.yahoo.com> Message-ID: <4B5B3731.7010306@satx.rr.com> On 1/23/2010 11:41 AM, John Clark wrote: > You are doing what Damien (incorrectly) accuses me of doing in the ESP > matter, refusing to believe something even though there is a mountain of > evidence showing it must be true because you can't figure out what > mechanism it works by. If Gordon Swobe or even John Clark can't figure > out how something could be, it doesn't follow that the thing in question > cannot be; in spite of everything the evidence still remains, > and if we can't explain the data then that's just tough. Uncanny. Absolutely weird. Apart from the "(incorrectly)" up there, I could indeed have written that, because it's true and exactly to the point. Of course for those who refuse to look at the mountain of evidence unless they are led up its slopes by their favorite high priests of authority, the data will go unacknowledged. Damien Broderick From moulton at moulton.com Sat Jan 23 17:26:40 2010 From: moulton at moulton.com (moulton at moulton.com) Date: 23 Jan 2010 17:26:40 -0000 Subject: [ExI] psi in Nature. Message-ID: <20100123172640.48006.qmail@moulton.com> On Sat, 2010-01-23 at 10:45 -0500, John Clark wrote: On Jan 22, 2010, Damien Broderick wrote: > Anticipated response: "Yeah, right, and what else have they published > there in the last 36 years? BULLSHIT!" > > Thank you Damien, you saved me some time. I attempted to find a copy of the paper by Targ and Puthoff. I do not have a subscription to Nature. I did track down what is supposedly a copy of the paper at: http://66.221.71.68/content/research/sria.htm (the Uri Geller website). If you scroll to the bottom of the page there are links for the images. I have not studied the paper sufficiently to make a determination about it. I am just passing on the URL. And just because I am passing on the URL do not think that I in any way endorse Uri Geller. I also found that if you go to YouTube and do a search on Uri Geller you find several video clips. Fred From gts_2000 at yahoo.com Sat Jan 23 18:20:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 10:20:34 -0800 (PST) Subject: [ExI] The digital nature of brains Message-ID: <28791.51071.qm@web36505.mail.mud.yahoo.com> --- On Sat, 1/23/10, Damien Broderick wrote: > Jesus Christ, how many people on this list can actually > read?
You snip out most of what I wrote as if I were > disagreeing with your assertions about simulations When you start a post with the one-word sentence "Wrong", as you did, then I think you should expect the other person to think you have a disagreement. > ...when > obviously I was drawing attention to the difference between > *functional simulations* and *superficial representations*: > > of the [thermometer]--perhaps its surface, perhaps some of > its innards (for various values of "scan"). An emulation, by > definition, is something that reproduces the effectivity of > the original, not just its superficial appearance. > > > I doubt that my slip in typing "computer" when I meant > "thermometer" has any bearing on your misprision. > > How about addressing my actual point? I would ask you to do the same, Damien. I responded to Eric's suggestion that actual digital thermostats equal digital simulations of analog thermostats. I don't see them as such for the reasons I gave, and also I don't see digital simulations of analog thermostats as copies or (to your point) as emulations of real thermostats. Unless we add something to the picture in the actual world, digital simulations of analog thermostats can regulate temperature only in simulated environments. If you really don't disagree then I wonder why you did not begin your post with something less confrontational. -gts From gts_2000 at yahoo.com Sat Jan 23 18:48:28 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 10:48:28 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <4B5B353D.6030709@satx.rr.com> Message-ID: <353514.63292.qm@web36508.mail.mud.yahoo.com> --- On Sat, 1/23/10, Damien Broderick wrote: >> The book about the apple describes the apple, and the >> digital simulation acts only as a translation of the book. >> No matter whether the description exists in book form or in >> digital form, it only describes the apple and descriptions >> of things do not equal the things they describe. > > Ah, so you *almost* get it. But you still confuse yourself > by conflating emulation and description. The Apple apple is > just another description; it can't be an emulation. But a > calculator represented on a computer monitor can be both, I would consider an emulation of an apple a copy, and yes, we cannot create emulations of apples; I already covered that subject in my first post in this thread. So yes, I get it. In this thread I target such religious nonsense as that which I see when people say that digitally simulated people can eat and taste digitally simulated apples. Eric has come forth to defend that idea. Will you defend it too? It seems to me that a digital simulation of a person eating apples exists only as a description of someone eating and tasting apples, not as anyone actually eating and tasting apples, and so I must bracket the terms. Digitally simulated people can eat digitally simulated apples in the same sense that "Gandalf" can "smoke" a "pipe" in "Middle Earth", which is to say they "eat them", but *not really*.
-gts From thespike at satx.rr.com Sat Jan 23 18:59:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 12:59:17 -0600 Subject: [ExI] The digital nature of brains In-Reply-To: <28791.51071.qm@web36505.mail.mud.yahoo.com> References: <28791.51071.qm@web36505.mail.mud.yahoo.com> Message-ID: <4B5B4705.80402@satx.rr.com> On 1/23/2010 12:20 PM, Gordon Swobe wrote: >> You snip out most of what I wrote as if I were >> disagreeing with your assertions about simulations > When you start a post with the one-word sentence "Wrong", as you did, then I think you should expect the other person to think you have a disagreement. Perhaps I should have written "Wrongly used word," then--but it's usually more effective to read all of a paragraph in order to understand what is being conveyed, rather than stopping at the first word and ignoring the rest. And you still don't seem to acknowledge that an *emulation* is something that *produces the same result as something it is mimicking*, rather than just bearing a superficial resemblance. Damien Broderick From spike66 at att.net Sat Jan 23 19:33:12 2010 From: spike66 at att.net (spike) Date: Sat, 23 Jan 2010 11:33:12 -0800 Subject: [ExI] electroweak stars Message-ID: <50C4A7D8CE3742FB958EEB793C397A83@spike> In the psi discussion, a good point is that new physical phenomena are still being discovered in our enlightened age. Here's an exciting development, electroweak stars: http://www.foxnews.com/scitech/2010/01/22/science-adds-new-class-stars-electroweak/?test=latestnews Science Adds a New Class of Stars: Electroweak By Clara Moskowitz - Space.com [image credit: NASA/Russell Croman Astrophotography] Physicists believe a new class of stars -- dubbed "electroweak" stars -- is hiding out somewhere in the universe. Scientists have proposed a new class of star, one with an exotic stellar engine that would emit mostly hard-to-detect neutrinos instead of photons of light like regular stars. These objects, dubbed "electroweak stars," are plausible because of the Standard Model of physics - though none have been detected yet - partly because they wouldn't shine very brightly in visible light. A team of physicists led by Glenn Starkman of Ohio's Case Western Reserve University describes the structure of such stars in a paper recently submitted to the journal Physical Review Letters. An electroweak star could come into being toward the end of a massive star's life, after nuclear fusion has stopped in its core, but before the star collapses into a black hole, the researchers found. At this point, the temperature and density inside a star could be so high that subatomic particles called quarks (which are the building blocks of protons and neutrons) could be converted into lighter particles called leptons, which include electrons and neutrinos. "In this process, which we call electroweak burning, huge amounts of energy can be released," the researchers wrote in the scientific paper. Unfortunately for observers, much of that energy would be in the form of neutrinos, which are very light neutral particles that can pass through ordinary matter without interacting, making them very difficult to detect. A small fraction of an electroweak star's output would be in the form of light, though, which is where astronomers could concentrate their efforts to observe them. But, "to understand that small fraction, we have to understand the star better than we do," Starkman said. If electroweak stars do exist, they could last at least 10 million years, the physicists found.
"This is long enough to represent a new stage in the evolution of a star if stellar evolution can take it there," the researchers wrote. Nonetheless, such a period of time is still merely a blink of an eye for most stars, which live for billions of years. "Electroweak stars would be an exciting addition to the diverse menagerie of astrophysical bodies that the universe provides," the scientists wrote. "Nevertheless, considerable work remains to be done before we can claim with confidence that such objects will form in the natural process of stellar evolution, or that they will indeed burn steadily for an extended period." Copyright C 2010 Space.com. All Rights Reserved. This material may not be published, broadcast, rewritten or redistributed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ablainey at aol.com Sat Jan 23 20:24:27 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Sat, 23 Jan 2010 15:24:27 -0500 Subject: [ExI] EPOC EEG headset In-Reply-To: <3ffdac781001222334n37b86086ldacd0ea0b742ff1d@mail.gmail.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <3ffdac781001222334n37b86086ldacd0ea0b742ff1d@mail.gmail.com> Message-ID: <8CC6A8132069C4A-7804-1A71B@webmail-d026.sysops.aol.com> Hi Reinhard, suspend your disbelief , it really does what it says on the tin. It reads both facial and eye movement signals from the forehaed, but decodes brainwave activity from its eeg sensors. These signals are then FFT decoded and used to train a nueral net. The end result is that a specific thought pattern is detected and converted into a PC event. atb A -----Original Message----- From: Reinhard H. To: ExI chat list Sent: Sat, 23 Jan 2010 7:34 Subject: Re: [ExI] EPOC EEG headset hi alex, i do not believe that this device really read thought patterns. all the consumer devices i know read only the electric activity of the skull muscles. try the following: take a video of your face during using the device and take a close look at your face musculature. i'm sure (99,9%+) that you unconsciously move your face by trying to move the robot arm. the electric activity of this moving is very much higher than the activity of your brain (through the skull). the device read this activity and not the activity of your brain. best regards reinhard -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Sat Jan 23 21:45:43 2010 From: scerir at libero.it (scerir) Date: Sat, 23 Jan 2010 22:45:43 +0100 (CET) Subject: [ExI] heaves a long broken psi Message-ID: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> Jeff: You know about the Aspect QM work. Decoherence happens. Instantaneously. Super-relativistically. Action at point 'a' in 3D space causes an INSTANTANEOUS effect at arbitrarily distant point 'b'. Others may offer alternative explanations, but to me, the explanation that tops the list -- I forget where I heard it -- admirably robust in its simplicity, is that the two entangled particles are in contact. (Isn't there a body of philosophical work dealing with the logical impossibility of action at a distance, implying the necessity of contact?) # Very close but not .... Two entangled particles are in contact indeed. It is called nonseparability. And, technically, they are called bi-particles. Even that seems wrong, before measurements it would be better to say bi-waves :-), and waves have loong tailss. 
So, for the bi-waves, there is no action at 'distance', and there is no passion at 'distance'. Possibly, for the bi-waves, there is not even 'distance'. Try to reduce - and not to extend - the number of dimensions. From 4d to 3d to 2d to 1d to 0d. Contact. There is no space, no time :-) From scerir at libero.it Sat Jan 23 22:44:32 2010 From: scerir at libero.it (scerir) Date: Sat, 23 Jan 2010 23:44:32 +0100 (CET) Subject: [ExI] psi in Nature Message-ID: <31811703.158651264286672082.JavaMail.defaultUser@defaultHost> Alex: The level of psi ability would therefore be dependent on the quantity of entangled atoms in each individual's brain. # Is the brain a chaotic object? I mean, is a neural net something chaotic, at least partially? I really do not know anything about that. But. It is known that chaos - at a macroscopic level, at a mesoscopic level - has a quantum signature ('signature', not necessarily 'cause') and this signature is the quantum entanglement of the quantum systems 'immersed' in the chaotic regime. From stathisp at gmail.com Sat Jan 23 23:04:03 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 10:04:03 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <575387.49824.qm@web36505.mail.mud.yahoo.com> References: <20100122181804.5.qmail@syzygy.com> <575387.49824.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/24 Gordon Swobe : > --- On Fri, 1/22/10, Eric Messick wrote: > >> Most thermostats installed today are digital simulations of >> analog thermostats. They manage to get the job done anyway. > > Modern thermostats contain digital circuitry but they do not equal digital simulations of analog thermostats. > > To see this, imagine that you have an instrument for scanning objects to create digital simulations. You scan an analog thermostat and observe the resulting simulation on your computer. You will not see a real digital thermostat appear on your computer screen. Instead you will see a digital simulation of a non-digital object. That simulation will not have the properties of the original; it will not have the capacity to regulate temperature in your room. > > At best that simulated analog thermostat can regulate simulated temperature in a simulated room that you also create on your computer, and then only as a digital simulation of an analog thermostat, not as a digital simulation of a digital thermostat. This is true, but you could hook up the simulation to a thermometer and it would be as good as a digital thermostat or you could just have the simulated thermostat regulate the temperature of a simulated room. You could do the same with a simulated human: connect him to sense organs or create a sufficiently rich virtual environment for him. In either case he should be conscious. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Jan 23 23:16:12 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 15:16:12 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: <4B5B4705.80402@satx.rr.com> Message-ID: <972578.82502.qm@web36508.mail.mud.yahoo.com> --- On Sat, 1/23/10, Damien Broderick wrote: > And you still don't seem to acknowledge that an *emulation* > is something that *produces the same result as something it > is mimicking*, rather than just bearing a superficial > resemblance. Depends on what you mean by "result". On software/hardware systems we can emulate operating systems and software, and we can emulate hardware.
However, as I think you've already agreed, we cannot emulate such things as apples. We can create simulations of apples and other non-digital objects, but we always lose their reality in the process. We end up with only a description of reality. Some people with overly vivid imaginations want to believe that digital descriptions of people can really live inside digital descriptions of reality, as if Gandalf really lives in Middle Earth and really smokes a pipe. -gts From stathisp at gmail.com Sat Jan 23 23:17:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 10:17:51 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <580930c21001230919h314a017coe68b9bf06f4cc792@mail.gmail.com> References: <758269.36155.qm@web36505.mail.mud.yahoo.com> <580930c21001230919h314a017coe68b9bf06f4cc792@mail.gmail.com> Message-ID: 2010/1/24 Stefano Vaj : > On 22 January 2010 00:37, Stathis Papaioannou wrote: >> 2010/1/22 Gordon Swobe : >>> Some people seem to deny the existence of consciousness and thus their own experiences of life in what look to me like vain attempts to escape the conclusion that humans might have something computers do not have. I don't have much to say to them. >> >> I agree that these people can't really deny the existence of >> consciousness. They must be meaning something other than what it looks >> like, or being provocative. > > Yes, this may be the reason why you two like discussing with each other so much... :-) > > I assume that on the contrary most people do not consider > consciousness as something much more special or "noumenically > existing" than, say, sleep. Most people consider consciousness *extremely* important, the single most important thing in their lives. That is why a gun held to someone's head, representing the threat of permanent loss of consciousness, is so effective a motivator. However, most people also tend to think that consciousness is something over and above the capacity for intelligent behaviour, as do Gordon and Searle. This is absurd, but it isn't immediately obvious that it is absurd even to rational/scientific types, which is why this discussion is important. -- Stathis Papaioannou From eric at m056832107.syzygy.com Sat Jan 23 23:34:36 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 23 Jan 2010 23:34:36 -0000 Subject: [ExI] The digital nature of thermostats In-Reply-To: <575387.49824.qm@web36505.mail.mud.yahoo.com> References: <20100122181804.5.qmail@syzygy.com> <575387.49824.qm@web36505.mail.mud.yahoo.com> Message-ID: <20100123233436.5.qmail@syzygy.com> Gordon writes: > >Modern thermostats contain digital circuitry but they do not equal > digital simulations of analog thermostats. > >To see this, imagine that you have an instrument for scanning objects > to create digital simulations. You scan an analog thermostat and > observe the resulting simulation on your computer. You will not see a > real digital thermostat appear on your computer screen. Instead you > will see a digital simulation of a non-digital object. That > simulation will not have the properties of the original; it will not > have the capacity to regulate temperature in your room. I don't understand your assertion that the thermostat hanging on my wall does not simulate an analog one. What does a thermostat do? It commands a furnace to turn on or off based on the difference between the room temperature and a set-point. An analog one does this by means of a tilt switch connected to a bi-metallic strip.
We could simulate the behavior of that bi-metallic strip at the atomic level in a giant supercomputer, and connect the simulation up to a silicon temperature sensor and a relay. That simulation would regulate the temperature of a real room. We could use less power by noticing that there was a simple relationship between the input temperature and the state of the relay, and replace the supercomputer with a small embedded processor. We've placed an abstraction boundary around the bi-metallic strip, and simplified it to a compare instruction in the embedded program. That compare instruction is still simulating the behavior of the bi-metallic strip. The box we've built looks just like the one hanging on my wall. It's got screw mounts for wires to control the furnace, just like the analog one. It reacts to changes in the room temperature, just like the analog one. It regulates the temperature of the room, just like the analog one. Inside, it has a CPU running a program which simulates an abstract analog thermostat by using a simple compare instruction. We build a p-neuron the same way. We give it sensors to detect the firings of neurons around it. We give it a CPU with a program to simulate the behavior of a neuron. We give it output devices for it to trigger to signal other neurons. In this case, the simulation is more complicated than a simple compare instruction, because the behavior of a neuron is much more complex than the behavior of a thermostat. Note that we can already do all of these things. Cochlear implants turn sound into neural signals. Nature reported in 2006 on a system which used neural signals to control a prosthetic hand (similar to the EPOC EEG headset being discussed in another thread, but more invasive). http://www.nature.com/nature/journal/v442/n7099/full/nature04970.html Neuronal ensemble control of prosthetic devices by a human with tetraplegia Neuronal ensemble activity recorded through a 96-microelectrode array implanted in primary motor cortex demonstrated that intended hand motion modulates cortical spiking patterns three years after spinal cord injury. Decoders were created, providing a 'neural cursor' with which [the subject] MN opened simulated e-mail and operated devices such as a television, even while conversing. Furthermore, MN used neural control to open and close a prosthetic hand, and perform rudimentary actions with a multi-jointed robotic arm. This paper describes precursor technology to the p-neurons we discussed earlier. This experiment is very similar to the partial replacement experiment we discussed earlier. MN was able to open *real* email and was able to move a *real* robotic arm using his simulated p-neurons. People with cochlear implants hear *real* sounds through their simulated p-neurons. And before you say that we're not actually simulating neurons here, note that in both cases there are complex encoding and decoding processes to generate and interpret the neural signals involved. That encoding and decoding is information processing that would otherwise have been performed by other neurons. The algorithmic programs in the prostheses are replacing and emulating those neural sub-systems. The key point here is that we hook our simulation up to the real world through input/output devices of some sort. A computer with no I/O devices is not very useful (except perhaps to its inhabitants). Those I/O channels allow our simulation to affect and be affected by real objects. The I/O devices are where symbols are grounded.
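To put the whole idea in one place, here is a toy sketch in Python (names invented for illustration; the "room" below is simulated, and the two I/O functions mark the abstraction boundary -- swap them for a real temperature sensor and a real furnace relay and the identical compare then regulates a real room):

    SET_POINT = 20.0   # degrees C
    DEAD_BAND = 0.5    # hysteresis, to avoid rapid on/off cycling

    def thermostat(temp, furnace_on):
        # The entire "bi-metallic strip" reduces to a comparison.
        if temp < SET_POINT - DEAD_BAND:
            return True       # too cold: close the "tilt switch"
        if temp > SET_POINT + DEAD_BAND:
            return False      # warm enough: open it
        return furnace_on     # inside the dead band: hold state

    # I/O boundary: replace these two functions with real hardware
    # drivers and nothing above them needs to change.
    room_temp = 15.0

    def read_sensor():
        return room_temp

    def drive_furnace(on):
        global room_temp
        room_temp += 0.3 if on else -0.1   # crude model of a real room

    furnace_on = False
    for _ in range(200):
        furnace_on = thermostat(read_sensor(), furnace_on)
        drive_furnace(furnace_on)
    print(round(room_temp, 1))   # hovers near the 20.0 set-point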
-eric From stathisp at gmail.com Sat Jan 23 23:36:39 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 10:36:39 +1100 Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: <580930c21001230951w32d675ecke839a988fad98aa2@mail.gmail.com> References: <580930c21001230951w32d675ecke839a988fad98aa2@mail.gmail.com> Message-ID: 2010/1/24 Stefano Vaj : > I have just happened to stumble on a brief essay which seems to offer the > ultimate philosophical "mise à point" on the debates having resurfaced on > this list on the implementation of "intelligence", "consciousness" or > specific individuals on platforms different from organic brains. > > There it is: > http://www.fqxi.org/community/forum/topic/essay-download/596/__details/Wolfram_WhatIsUltimatelyPos_1.pdf The essay is on computability of the universe. It doesn't directly address the question of whether computability of brain processes implies computability of consciousness, although from its tone it seems that it does assume this. -- Stathis Papaioannou From eric at m056832107.syzygy.com Sat Jan 23 23:41:41 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 23 Jan 2010 23:41:41 -0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <20100122181804.5.qmail@syzygy.com> <575387.49824.qm@web36505.mail.mud.yahoo.com> Message-ID: <20100123234141.5.qmail@syzygy.com> Stathis writes: >This is true, but you could hook up the simulation to a thermometer >and it would be as good as a digital thermostat or you could just >have the simulated thermostat regulate the temperature of a simulated >room. You could do the same with a simulated human: connect him to >sense organs or create a sufficiently rich virtual environment for >him. In either case he should be conscious. Exactly! I just said the same thing using a lot more words. -eric From stathisp at gmail.com Sat Jan 23 23:50:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 10:50:56 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100123234141.5.qmail@syzygy.com> References: <20100122181804.5.qmail@syzygy.com> <575387.49824.qm@web36505.mail.mud.yahoo.com> <20100123234141.5.qmail@syzygy.com> Message-ID: 2010/1/24 Eric Messick : > Stathis writes: >>This is true, but you could hook up the simulation to a thermometer >>and it would be as good as a digital thermostat or you could just >>have the simulated thermostat regulate the temperature of a simulated >>room. You could do the same with a simulated human: connect him to >>sense organs or create a sufficiently rich virtual environment for >>him. In either case he should be conscious. > > Exactly! I just said the same thing using a lot more words. Yes, sorry for repeating it. I tend to answer emails in the order I receive them. -- Stathis Papaioannou From stefano.vaj at gmail.com Sat Jan 23 23:57:26 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 00:57:26 +0100 Subject: [ExI] electroweak stars In-Reply-To: <50C4A7D8CE3742FB958EEB793C397A83@spike> References: <50C4A7D8CE3742FB958EEB793C397A83@spike> Message-ID: <580930c21001231557w4ab8f79h5f1a4bf58a89100e@mail.gmail.com> 2010/1/23 spike : > "This is long enough to represent a new stage in the evolution of a star if > stellar evolution can take it there," the researchers wrote. Star *evolution*?! Come on, has anybody ever seen a small red star becoming a big blue star?
Everybody should know that stars were created exactly as they are during the Genesis... :-D -- Stefano Vaj From gts_2000 at yahoo.com Sun Jan 24 00:01:26 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 16:01:26 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100123234141.5.qmail@syzygy.com> Message-ID: <939853.12789.qm@web36505.mail.mud.yahoo.com> --- On Sat, 1/23/10, Eric Messick wrote: >> Stathis writes: >> This is true, but you could hook up the simulation to a >> thermometer >> and it would be as good as a digital thermostat or you > could just > >have the simulated thermostat regulate the temperature > of a simulated > >room. You could do the same with a simulated > human: connect him to > >sense organs or create a sufficiently rich virtual > environment for > >him. In either case he should be conscious. > > Exactly! I just said the same thing using a lot more > words. As I replied to Stathis, the simulated thermostat will not regulate temperature in a real room without adding hardware to the computer that runs the simulation. After you add that hardware you will have a real thermostat. But that new real thermostat will also defy your attempts to ignore reality: like the first, a simulation of it will not regulate temperature in a real room until you add something to the picture here in the real world. Digital simulations of non-digital objects never equal the things they simulate, except that some people here like to imagine so. -gts From stefano.vaj at gmail.com Sun Jan 24 00:06:26 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 01:06:26 +0100 Subject: [ExI] heaves a long broken psi In-Reply-To: <4B5A0DBF.50104@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> Message-ID: <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> On 22 January 2010 21:42, Damien Broderick wrote: > This misuses the word. If ESP is real, there is no reason to suppose that it > functions by abrogating the laws of physics; far more economical to suppose > that we do not yet fully understand all those laws. Moreover, any different concept would be a thinly secularised version of the idea of "divine" laws, since "natural laws" is just a human metaphor for how things actually go, not for how they "should" go according to the will of some kind of legislator. Accordingly, a human law can be breached, while remaining valid. A "natural law" cannot be breached, in the sense that it would not be valid any more if any exception thereto exists. Having said that, I am quite persuaded that if you try and guess a billion times a playing card you guess slightly more often than would be statistically warranted (I am much more sceptical about other, fancier alleged ESP phenomena). Only, I am much less sure that whatever the reason of that may be, it has anything to do with quantum mechanics. This kind of "explanation" seems to me a far-fetched attempt to explain something of which we do not know much with something else which we understand rather poorly. -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 24 00:13:33 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 01:13:33 +0100 Subject: [ExI] Coherent vs.
Incoherent Fears of Being Uploaded In-Reply-To: References: <758269.36155.qm@web36505.mail.mud.yahoo.com> <580930c21001230919h314a017coe68b9bf06f4cc792@mail.gmail.com> Message-ID: <580930c21001231613i3440dfecr2de59be8bd0cc496@mail.gmail.com> On 24 January 2010 00:17, Stathis Papaioannou wrote: > Most people consider consciousness *extremely* important, the single > most important thing in their lives. Mmhhh. Whenever this is the case, I suspect it may have something to do with some thinly secularised concept of "soul". > That is why a gun held to > someone's head, representing the threat of permanent loss of > consciousness, is so effective a motivator. I beg to differ. A gun held to one's head, or any other proximate chance of physical threat, often indicated by pain, simply unleashes one's genes' whisper to run away as fast as one can. Exactly for the same reason that a fruitfly does not dive into water or a rat likes sex. Because those who are inclined otherwise would not leave much offspring. -- Stefano Vaj From eric at m056832107.syzygy.com Sun Jan 24 00:15:58 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 24 Jan 2010 00:15:58 -0000 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <353514.63292.qm@web36508.mail.mud.yahoo.com> References: <4B5B353D.6030709@satx.rr.com> <353514.63292.qm@web36508.mail.mud.yahoo.com> Message-ID: <20100124001558.5.qmail@syzygy.com> Gordon writes: >In this thread I target such religious nonsense as that which I see > when people say that digitally simulated people can eat and taste > digitally simulated apples. Eric has come forth to defend that > idea. Will you defend it too? Hey, no need to disparage my remarks with the label "religious"! When you bite into an apple, your sensory neurons excite a particular neural firing pattern in your brain. That pattern represents (among other things) your experience of what it is like to eat an apple. That pattern encodes a set of symbols which are grounded by the signals coming in from your sensory neurons. You probably have no words for most of those symbols, since the experience is much richer than a verbal description of the experience. Now let's consider a simulated person biting into a simulated apple. At some level, the apple simulation communicates to the mouth simulation some information about taste and texture. That information gets translated into simulated sensory neural signals. The brain simulation generates a neural firing pattern based partly on the sensory signals. The resulting pattern encodes some of the same symbols that were activated in your real brain when you really bit into an apple. Our simulated person then decides to describe the experience in a simulated email, which he sends to a real friend. When the friend reads the (now real) email, she decides that her friend has enjoyed an apple. If no one has actually enjoyed an apple, who wrote the email? Or, do you contend that no such email could possibly be produced? >Digitally simulated people can eat digitally simulated apples in the > same sense that "Gandalf" can "smoke" a "pipe" in "Middle Earth", > which is to say they "eat them", but *not really*. So you keep asserting...
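For concreteness, the chain above can be cartooned in a few lines of Python (everything here is a toy with invented names; nobody is claiming that a dict of floats tastes anything -- but notice that the email that comes out the far end is a perfectly real string):

    # toy "apple simulation": a few physical properties
    apple = {"sweetness": 0.8, "tartness": 0.3, "crunch": 0.9}

    def mouth(fruit):
        # "sensory transduction": fruit properties -> signal levels
        return [fruit["sweetness"], fruit["tartness"], fruit["crunch"]]

    def brain(signals):
        # "neural firing pattern": signals -> crude verbal symbols
        labels = ["sweet", "tart", "crisp"]
        return [word for word, level in zip(labels, signals) if level > 0.5]

    def write_email(words):
        return "Just ate an apple -- " + " and ".join(words) + "."

    print(write_email(brain(mouth(apple))))
    # -> Just ate an apple -- sweet and crisp.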
-eric From stefano.vaj at gmail.com Sun Jan 24 00:18:13 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 01:18:13 +0100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100122181804.5.qmail@syzygy.com> References: <64697.50664.qm@web36501.mail.mud.yahoo.com> <20100122181804.5.qmail@syzygy.com> Message-ID: <580930c21001231618o7c08b6a1yd5b8372f1b859e34@mail.gmail.com> On 22 January 2010 19:18, Eric Messick wrote: > Gordon writes: > The digital nature of > neurons is one of the things which makes them most useful as > information processors. In any event, I believe it has been demonstrated that nothing that an analog computer can achieve is really irreproducible by a digital computer... -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 24 00:20:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 01:20:41 +0100 Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: References: <580930c21001230951w32d675ecke839a988fad98aa2@mail.gmail.com> Message-ID: <580930c21001231620i3b2491d5geb42fb8d744d949c@mail.gmail.com> On 24 January 2010 00:36, Stathis Papaioannou wrote: > The essay is on computability of the universe. It doesn't directly > address the question of whether computability of brain processes > implies computability of consciousness, although from its tone it > seems that it does assume this. Certainly it does. In fact, in some sense it implies that the boundaries of what can exist are defined by the boundaries of what can be computed... -- Stefano Vaj From thespike at satx.rr.com Sun Jan 24 00:29:20 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 18:29:20 -0600 Subject: [ExI] heaves a long broken psi In-Reply-To: <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> Message-ID: <4B5B9460.7060005@satx.rr.com> On 1/23/2010 6:06 PM, Stefano Vaj wrote: > I am quite persuaded that if you try and guess a > billion times a playing card you guess slightly more often than would > be statistically warranted Experience indicates that this isn't true, actually, because boredom crushes whatever is invoked by psi experiments, as it does with imagination, sex, and many other human responses. You do better getting a million people to make 1000 guesses each, or 10 million to make 100. > I am much less sure that whatever the > reason of that may be, it has anything to do with quantum mechanics. > This kind of "explanation" seems to me a far-fetched attempt to > explain something of which we do not know much with something else > which we understand rather poorly. This is often said dismissively, but I disagree. People turn to QT because it is the best theory available for how the real world works, and it sometimes turns out that interpretations of QT appear to be consilient with at least some psi anomalies, far more so than with the predictions of classical physics models. For example, entanglement does act in ways that utterly defy traditional notions of separability.
It is not at all clear that this can be used to account for precognition or telepathy, given the fairly well-established result that messages can't be sent via entanglement because a particle can't be forced into a desired state without destroying the entanglement--but there could be some aspect science has so far overlooked. Or perhaps, like the rolled-up dimensions and dark matter, it's something that hardly anyone ever thought about until quite recently--perhaps (perish the thought!) some deep insight that yet remains to be discovered. Meanwhile, the empirical results of, say, double-blinded precognitive remote viewing (by professional researchers, not bogus idiots and scammers blathering on Coast to Coast) remain to tease the theorists, if they can be bothered looking at these results. Damien Broderick From stathisp at gmail.com Sun Jan 24 00:37:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 11:37:47 +1100 Subject: [ExI] Coherent vs. Incoherent Fears of Being Uploaded In-Reply-To: <580930c21001231613i3440dfecr2de59be8bd0cc496@mail.gmail.com> References: <758269.36155.qm@web36505.mail.mud.yahoo.com> <580930c21001230919h314a017coe68b9bf06f4cc792@mail.gmail.com> <580930c21001231613i3440dfecr2de59be8bd0cc496@mail.gmail.com> Message-ID: 2010/1/24 Stefano Vaj : >> That is why a gun held to >> someone's head, representing the threat of permanent loss of >> consciousness, is so effective a motivator. > > I beg to differ. A gun held to one's head, or any other proximate > chance of physical threat, often indicated by pain, simply unleashes > one's genes' whisper to run away as fast as one can. Exactly for the > same reason that a fruitfly does not dive into water or a rat likes > sex. Because those who are inclined otherwise would not leave much > offspring. That's the reason, certainly. There is nothing fundamentally wrong with being dead. Still, even though I have this insight, I don't want to be free of the manipulation; and this of course is part of the manipulation. -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 24 00:43:16 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 11:43:16 +1100 Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: <580930c21001231620i3b2491d5geb42fb8d744d949c@mail.gmail.com> References: <580930c21001230951w32d675ecke839a988fad98aa2@mail.gmail.com> <580930c21001231620i3b2491d5geb42fb8d744d949c@mail.gmail.com> Message-ID: 2010/1/24 Stefano Vaj : > On 24 January 2010 00:36, Stathis Papaioannou wrote: >> The essay is on computability of the universe. It doesn't directly >> address the question of whether computability of brain processes >> implies computability of consciousness, although from its tone it >> seems that it does assume this. > > Certainly it does. In fact, in some sense it implies that the > boundaries of what can exist are defined by the boundaries of what can > be computed... Yes, I agree with this, but people like Searle assert that everything apart from consciousness is computable, and this needs specific rebuttal.
-- Stathis Papaioannou From gts_2000 at yahoo.com Sun Jan 24 01:08:54 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 17:08:54 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) Message-ID: <85232.43755.qm@web36505.mail.mud.yahoo.com> --- On Sat, 1/23/10, Stathis Papaioannou wrote: >> At best that simulated analog thermostat can regulate > simulated temperature in a simulated room that you also > create on your computer, and then only as a digital > simulation of an analog thermostat, not as a digital > simulation of a digital thermostat. > > This is true, but you could hook up the simulation to a > thermometer and it would be as good as a digital thermostat In that case you have created a real thermostat and my same argument applies: a digital simulation of it will not regulate temperature in a real room. > or you could just have the simulated thermostat regulate the > temperature of a simulated room. In that case you still have only a simulated thermostat, not a real one. -gts From stathisp at gmail.com Sun Jan 24 01:11:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 12:11:19 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <939853.12789.qm@web36505.mail.mud.yahoo.com> References: <20100123234141.5.qmail@syzygy.com> <939853.12789.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/24 Gordon Swobe : > As I replied to Stathis, the simulated thermostat will not regulate temperature in a real room without adding hardware to the computer that runs the simulation. After you add that hardware you will have a real thermostat. But that new real thermostat will also defy your attempts to ignore reality: like the first, a simulation of it will not regulate temperature in a real room until you add something to the picture here in the real world. > > Digital simulations of non-digital objects never equal the things they simulate, except that some people here like to imagine so. It is true that a digital simulation is not the same as the original, but the question is whether it performs the same function as the original. A simulated apple could taste, feel, and smell like a real apple to a person with a lot of extra equipment, which I'm sure computer game developers are working on; a simulated clock, on the other hand, can tell time the same as a real clock without additional equipment. It depends on what function of the original you are interested in. A simulated brain will not be identical to a real brain but you seem to agree that it could display the same behaviour as a real brain if we added appropriate sensory organs and effectors. However, you make the claim that although every other function of the brain could be reproduced by the simulated brain, the consciousness can never be reproduced. But if that were so, it would allow for the possibility that you are a zombie and don't realise it, which you agree is absurd. Therefore, you MUST agree that it is impossible to reproduce all the functions of the brain without also reproducing consciousness. It is still open to you to claim that a computer could never reproduce human intelligence (and therefore never reproduce human consciousness); although there is no good reason to believe that this is the case, at least it is not self-contradictory. However, you seem remarkably unwilling to do this even though it is the obvious way out.
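As an aside, the clock case above can be made concrete with nothing but software (a minimal Python sketch; the only hardware it leans on is the computer's own clock):

    import time

    def simulated_clock(start):
        # No gears or pendulum swing here, but counting elapsed seconds
        # *is* the function a clock exists to perform.
        elapsed = int(time.time() - start)
        h, rem = divmod(elapsed, 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}"

    start = time.time()
    print(simulated_clock(start))   # 00:00:00 -- real timekeeping, no extra equipment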
-- Stathis Papaioannou From avantguardian2020 at yahoo.com Sun Jan 24 01:07:56 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 23 Jan 2010 17:07:56 -0800 (PST) Subject: [ExI] quantum brains Message-ID: <761294.98627.qm@web65601.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stathis Papaioannou > To: ExI chat list > Sent: Wed, January 20, 2010 12:35:50 AM > Subject: Re: [ExI] quantum brains > Penrose thinks there is an as yet undiscovered theory of quantum > gravity which is uncomputable and which is an essential part of brain > function. I don't care what Penrose thinks except insofar as I agree that QM may play a role in brain function. Of course I have been of that opinion long before I heard of Penrose's theory, but that is not important. His reasoning for bringing quantum gravity into the picture eludes me, except perhaps as a means to place his theory beyond the realm of experimental falsification. And of course the microtubules are a bizarre non sequitur brought in by Hameroff because, like most molecular biologists, he thinks the protein or gene he is specialized in is the most important molecule in the world. So let's throw out the gravity, since astronauts in zero G don't get stupid or lose consciousness. And let's assume that microtubules are a structural component of the cytoskeleton and leave it at that. After all, neurons are not particularly mobile for cells, and if microtubules were the mechanism of consciousness, people would think with their muscles instead of their brains, because myocytes have more microtubules than neurons. And let's put off any conclusions regarding the computability or uncomputability of the brain until we have better defined the problem space and have more data. Let's just entertain the naked hypothesis that brains may utilize QM for some part of their function and see where it leads. > We haven't been able to make self-repairing, self-replicating > machines, and nature has been doing it for billions of years. It's > possible that we will be able to upload minds before we can make > artificial organisms. But that doesn't mean that vitalism is correct. Nature hasn't been doing it for all those billions of years. Spontaneous generation of de novo life was experimentally falsified by Pasteur in the 19th century, at least under current conditions on planet Earth. Nature need only have done it once in those billions of years, and then those self-replicating machines did the rest. I have some unusual hypotheses about the origins of life too that I won't go into now. But I certainly don't see what vitalism has to do with this discussion. > > That being said, there are a lot of parallels between how people and quantum > particles behave. For one thing, they both behave probabilistically. One cannot > predict a person's actions in response to a stimulus to the degree that one can > predict say a falling brick, the oxidation of iron, or other straightforward > physical process. The best one can do is assign probabilities based on the > previous history and the statistical analysis of large ensembles of similar > people. While economists try to constrain predicted behavior by rationality, > people, even rational people, can and do act irrationally under certain > conditions. > > You could make the same analogy between quantum particles and any > classical chaotic or truly random system. Perhaps. But there are other parallels that don't apply to classical chaotic systems.
Things like the mind:body -> quantum wavefunction:quantum information dualities that Serafino mentioned in his post, to which one could add the more cliché wave:particle duality. Then there is the way that excitatory post-synaptic potentials (EPSP) and inhibitory post-synaptic potentials (IPSP) can sum over time and space to trigger neuronal depolarization that is very reminiscent of constructive and destructive interference. Although by my cursory swim through the literature, reports are all over the board with the quantitative measurements of these things, the mean voltages of these signals are approximately 5 millivolts and the mean current is about 25 picoamps. The average duration of these signals is about 20 milliseconds. Multiply all these rough figures together and you get approximately 2.5 femtojoules or about 15,600 eV, which is admittedly too much energy for quantum effects. However there are EPSPs that fire spontaneously at a much smaller voltage, called miniature EPSPs or mEPSPs, that are in the range of about 400 microvolts, 10 picoamps, and last about 1 millisecond. These are thought to be caused by single vesicles of neurotransmitter being randomly released into the synaptic cleft. These things have an energy of about 25 eV, which for comparison is not much higher than the ground state of the hydrogen atom at -13.6 eV. Moreover these things happen quite frequently. http://jp.physoc.org/content/494/Pt_1/171.full.pdf+html Now it seems to me that these mEPSPs in the brain are very similar to the quantum fluctuations in normal matter. Like the fluctuations in a nebula of hydrogen gas that could trigger the condensation of the gas into a protostar. Since EPSPs are additive, one or more of these things could push a subthreshold normal EPSP over the threshold, causing a recipient neuron to depolarize and initiate an action potential. So this is a potential mechanism for leaps of intuition, hunches, imagination, and creativity. And it seems a much more testable hypothesis than quantum microgravity yanking on microtubules à la Penrose and Hameroff. Now the interesting question is: are these things capable of the more bizarre quantum behavior like uncertainty, entanglement, and wave-particle duality? e.g. could mEPSPs interfere with themselves? Or might they exhibit particle properties and be called "psions"? Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From gts_2000 at yahoo.com Sun Jan 24 01:21:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 23 Jan 2010 17:21:00 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <20100124001558.5.qmail@syzygy.com> Message-ID: <899032.83390.qm@web36504.mail.mud.yahoo.com> --- On Sat, 1/23/10, Eric Messick wrote: > Hey, no need to disparage my remarks with the label "religious"! Sure looks like religion to me! > Now let's consider a simulated person biting into a > simulated apple. At some level, the apple simulation communicates to the > mouth simulation some information about taste and texture. > That information gets translated into simulated sensory neural > signals. The brain simulation generates a neural firing pattern based > partly on the sensory signals. The resulting pattern encodes some > of the same symbols that were activated in your real brain when you > really bit into an apple. That "neural firing pattern" amounts to mindless software running on some computer.
> Our simulated person then decides to describe the > experience in a simulated email, which he sends to a real friend. > When the friend reads the (now real) email, she decides that her friend > has enjoyed an apple. > > If no one has actually enjoyed an apple, who wrote the > email? A program wrote it, one like Eliza (if you remember her) but perhaps smart enough to fool you. > Or, do you contend that no such email could possibly be produced? Not at all. My spam filter blocks hundreds of fake emails every day. -gts From thespike at satx.rr.com Sun Jan 24 01:59:47 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 19:59:47 -0600 Subject: [ExI] quantum brains In-Reply-To: <761294.98627.qm@web65601.mail.ac4.yahoo.com> References: <761294.98627.qm@web65601.mail.ac4.yahoo.com> Message-ID: <4B5BA993.5010907@satx.rr.com> On 1/23/2010 7:07 PM, The Avantguardian wrote: > Then there is the way that excitatory post-synaptic potentials (EPSPs) and inhibitory post-synaptic potentials (IPSPs) can sum over time and space to trigger neuronal depolarization, which is very reminiscent of constructive and destructive interference. Although by my cursory swim through the literature, reports are all over the board with the quantitative measurements of these things, the mean voltages of these signals are approximately 5 millivolts and the mean current is about 25 picoamps. The average duration of these signals is about 20 milliseconds. Multiply all these rough figures together and you get approximately 2.5 femtojoules, or about 15,600 eV, which is admittedly too much energy for quantum effects. > > However there are EPSPs that fire spontaneously at a much smaller voltage, called miniature EPSPs or mEPSPs, that are in the range of about 400 microvolts, 10 picoamps, and last about 1 millisecond. These are thought to be caused by single vesicles of neurotransmitter being randomly released into the synaptic cleft. These things have an energy of about 25 eV, which for comparison is not much higher than the 13.6 eV ground-state binding energy of the hydrogen atom. Moreover these things happen quite frequently. > > http://jp.physoc.org/content/494/Pt_1/171.full.pdf+html > > Now it seems to me that these mEPSPs in the brain are very similar to the quantum fluctuations in normal matter, like the fluctuations in a nebula of hydrogen gas that could trigger the condensation of the gas into a protostar. Since EPSPs are additive, one or more of these things could push a subthreshold normal EPSP over the threshold, causing a recipient neuron to depolarize and initiate an action potential. So this is a potential mechanism for leaps of intuition, hunches, imagination, and creativity. And it seems a much more testable hypothesis than quantum microgravity yanking on microtubules a la Penrose and Hameroff. Now the interesting question is: are these things capable of the more bizarre quantum behavior like uncertainty, entanglement, and wave-particle duality? E.g., could mEPSPs interfere with themselves? Or might they exhibit particle properties and be called "psions"? Are you familiar with the analyses by physicist Evan Harris Walker, somewhat along these lines? (I happen to think he was full of shit, and said so to his e-face, but I might be wrong.) And of course Eccles and Popper, back in the day.
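(For what it's worth, the arithmetic in Stuart's figures quoted above is easy to check; a minimal Python sketch, in which every input number is his rough estimate rather than a measured constant:)

EV_PER_JOULE = 1.0 / 1.602e-19  # electron-volts per joule

def pulse_energy_ev(volts, amps, seconds):
    # Crude rectangular-pulse energy: E = V * I * t, converted to eV.
    return volts * amps * seconds * EV_PER_JOULE

epsp = pulse_energy_ev(5e-3, 25e-12, 20e-3)    # ~5 mV, ~25 pA, ~20 ms
mepsp = pulse_energy_ev(400e-6, 10e-12, 1e-3)  # ~400 uV, ~10 pA, ~1 ms

print(f"EPSP  ~ {epsp:,.0f} eV")  # ~15,600 eV, far above quantum scales
print(f"mEPSP ~ {mepsp:.1f} eV")  # ~25 eV, the same order as hydrogen's
                                  # 13.6 eV ground-state binding energy

(Whether 25 eV is actually low enough for coherent quantum effects in a warm, wet synapse is, of course, exactly what is in dispute.)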
Damien Broderick From jrd1415 at gmail.com Sun Jan 24 02:05:40 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 23 Jan 2010 19:05:40 -0700 Subject: [ExI] heaves a long broken psi In-Reply-To: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> Message-ID: On 1/23/10, scerir wrote: > # Very close but not .... Two entangled particles are in contact indeed. It > is > called nonseparability. And, technically, they are called bi-particles. > Even > that seems wrong, before measurements it would be better to say bi-waves > :-), > and waves have loong tailss. So, for the bi-waves, there is no action at > 'distance', and there is no passion at 'distance'. Possibly, for the > bi-waves, > there is no even 'distance'. Try to reduce - and not to extend - the number > of > dimensions. From 4d to 3d to 2d to 1d to 0d. Contact. There is no space, no > time :-) Scerir, What a pleasure this is. I've long admired the quality of your physics savvy. And I look forward to responding to your comment. Problem is: how to disentangle the whimsy indicated by the two smileys from the serious. Anyway, here goes... First, I loved the "there is no passion at [a] 'distance'." I'm not at all sure what that means, but I love it, and I'm gonna let it simmer at length in my subconscious, cause "I need the eggs". Whenever a techno-weenie, er,... technophile summons passion into the techno-whoopie, well, I mean,...science fired by passion? Yeah, baby, I'm for that. (It's the Italian thing, isn't it? The 'passion' connection....) Okay, so now let me take a few deep breaths and calm down. "Self-replicating machine systems, ...self-replicating machine systems,...self-replicating machine systems,...etc." (It's my mantra. Takes me to my 'happy place'...) Ok. I'm all calm now. You wrote: >...So, for the bi-waves, there is no action at > 'distance'... " Uh,... well, yeah. You start with, "...very close but no." But I don't see where we disagree, cause that's exactly what I was saying. There is no action at a distance. I must have communicated poorly. Regarding wave-particle duality, ok, I've heard of that. As I understand it, all the sparkling bits in our universe can be described using two equally valid formulations, the classical which gives us particles, and the quantum which gives us waves. And it was my impression there is this idea that until observed/measured, any given bit is somehow "indeterminate", existing as both particle and wave, and in all acceptable configurations at once, but only in some probabilistic potentiality. Part and parcel of quantum weirdness. Past my hat size. Makes my head hurt. But here's the deal. If you send two entangled photons off in different directions -- the Aspect experiment? -- the experimental apparatus already allows you to treat them as particles. How else can you send them off in different directions? So the bi-wave is already (or is it 'still'?) also a bi-particle. And isn't this, in fact, generally the case? Can't I take any photon/photon wave (or massy baryonic or leptonic particle) at any time and describe it with equal validity using either the particle or wave protocol? That is, can't I put it in a box and describe it as a wave, open the box, look in, and describe it as a particle, put my hands over my eyes, treat the room as just a big box, and keeping my eyes closed, describe it as a wave again. Isn't the wave-particle duality a 24/7 thing?
Always on, so to speak, ready at every instant for a "reset" to simultaneous wave particle potentiality? But I digress... So you have these two photonic bits, produced by an experimental apparatus that allows us to assert with some confidence certain particle-associated parameters -- ie position and velocity in 3D space -- (which it seems (to me) we could cover our eyes and reformulate into wave-associated parameters, cause physics takes no time outs), and we say "That one's over there", pointing off in the distance, "and the other one is over here", pointing to a photon trap on the laboratory bench. And now you are prepared to go to the photon trap and slap the captured photon around a bit until it (and its 'distant' accomplice) decohere/disentangle. Alternatively -- in recognition of the semantic challenges which confront us -- I could say, "Disturb the "photon trap" until the single quantum waveform which constitutes the "not-two" "not particles" collapses and spits out the result: two particles localized in 3D space, or two new, no-longer-entangled quantum waveforms INSTANTANEOUSLY extending to the furthest reaches of 3D space, and whose waveforms might very well be mathematically superposed and seen yet again as a single waveform -- a component in the greater single waveform which is the entire universe. Damn! There I go digressing again. Okay, okay. (Takes a deep breath and gets back to it.) So you wrote: > ...Possibly, for the bi-waves, there is no even 'distance' Yes, yes, that was precisely my point, but you threw me off with that "...very close, but no." In our 4D realm everything that manifests bears the features of 4D's. Can't escape that IN THIS REALM. In fact, I'm inclined to posit that particle-ness can't exist without spatial and temporal separation. Classical physics is 4D spacetimey-ness. So your bi-waves can't exist in this realm without bearing the marks of and conforming to the rules of 4D spacetime. Those marks are two distinct photons at a known measurable distance, and with no possibility of intimate instantaneous contact (I assert). Ergo, in our 4D spacetime, instantaneous action across distance must be mediated by and in other dimensions. I avoided using the term "in contact" above, because "in contact" is 'spatial'. It implies 4D. Whereas, in other dimensions, what can be said of space and time? Perhaps nothing. We have space and time in our 4D's, and are much pleased, but that shouldn't prejudice us. No reason to project what we know onto what we don't. And hey, they're OTHER dimensions, they should be different, not some boring rehash of what's already been done. (Okay, that's not really a 'logical' argument, more like an aesthetic one.) So if you want to assert that the bi-waves know no separation, know no 'distance', I'm ready to entertain that notion so long as it is restricted to some extra-dimensional context where the "rules" of existence may not need, may in fact exclude what we experience as space and time. Okay, I've embarrassed myself enough for one day. Best, Jeff Davis "Everything's hard till you know how to do it."
Ray Charles From avantguardian2020 at yahoo.com Sun Jan 24 02:35:57 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 23 Jan 2010 18:35:57 -0800 (PST) Subject: [ExI] quantum brains In-Reply-To: <4B5BA993.5010907@satx.rr.com> References: <761294.98627.qm@web65601.mail.ac4.yahoo.com> <4B5BA993.5010907@satx.rr.com> Message-ID: <696694.9108.qm@web65613.mail.ac4.yahoo.com> ----- Original Message ---- > From: Damien Broderick > To: ExI chat list > Sent: Sat, January 23, 2010 5:59:47 PM > Subject: Re: [ExI] quantum brains > Are you familiar with the analyses by physicist Evan Harris Walker, > somewhat along these lines? (I happen to think he was full of shit, and > said so to his e-face, but I might be wrong.) And of course Eccles and > Popper, back in the day. No, Damien. I am Googling them but short of references to his book, which I have not read, I am not getting too many details. Could you elaborate on his analysis and what you disagreed with? Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From thespike at satx.rr.com Sun Jan 24 03:08:28 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jan 2010 21:08:28 -0600 Subject: [ExI] quantum brains In-Reply-To: <696694.9108.qm@web65613.mail.ac4.yahoo.com> References: <761294.98627.qm@web65601.mail.ac4.yahoo.com> <4B5BA993.5010907@satx.rr.com> <696694.9108.qm@web65613.mail.ac4.yahoo.com> Message-ID: <4B5BB9AC.7060004@satx.rr.com> On 1/23/2010 8:35 PM, The Avantguardian wrote: >> > Are you familiar with the analyses by physicist Evan Harris Walker, >> > somewhat along these lines? (I happen to think he was full of shit, and >> > said so to his e-face, but I might be wrong.) And of course Eccles and >> > Popper, back in the day. > > I am Googling them but short of references to his book which I have not read, The book's the simplest source, but there used to be papers on his website. He died several years ago so maybe the site is dead too. >Could you elaborate on his analysis and what you disagreed with? There was a lot of email on a list, but it all went away with the magic smoke several computers ago. I found his analysis madly reductive and overly concrete, as well as handwavy with numbers. But then he was a nuclear weapons physicist and I'm an sf writer so who am I to say? He calculated bit rates for volition and consciousness, and had a model of mind tweaking electrons at synaptic junctions, as did Nobelist neuroscientist Eccles. Damien Broderick From avantguardian2020 at yahoo.com Sun Jan 24 04:06:29 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 23 Jan 2010 20:06:29 -0800 (PST) Subject: [ExI] heaves a long broken psi In-Reply-To: References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> Message-ID: <589142.44940.qm@web65611.mail.ac4.yahoo.com> ----- Original Message ---- > From: Jeff Davis > To: scerir ; ExI chat list > Sent: Sat, January 23, 2010 6:05:40 PM > Subject: Re: [ExI] heaves a long broken psi > So your bi-waves can't exist in this realm without bearing the marks > of and conforming to the rules of 4D spacetime. Those marks are two > distinct photons at a known measurable distance, and with no > possibility of intimate instantaneous contact (I assert). Ergo, in > our 4D spacetime, instantaneous action across distance must be > mediated by and in other dimensions.
[snip] > So if you want to assert that the bi-waves know no separation, know no > 'distance', I'm ready to entertain that notion so long as it is > restricted to some extra-dimensional context where the "rules" of > existence may not need, may in fact exclude what we experience as > space and time. FWIW, when you are talking about photons even at the other end of physics, i.e. special relativity, they don't experience the same 4D space-time that we do. Similar to Serafino's observation, the dimensionality of the universe is not increased but is instead reduced. The Lorentz length contraction equation L'=L*sqrt(1-(v^2/c^2)) implies that for an observer in the same reference frame as the photons, the universe is squashed into a 2D plane of zero thickness perpendicular to the path that the photon is travelling. And by time dilation, T'=T/sqrt(1-(v^2/c^2)), rearranged to T=T'*sqrt(1-(v^2/c^2)), the time interval between *any* two events happening to the photon in its own frame of reference is likewise zero. Thus in their own frames of reference, photons from a source never leave the source, and their source and their destination are one and the same. They don't move, because they have no space or time with which to move within. They are always in contact in the same point in 2D space at the same instant in time even if from the reference frame of slow moving matter their source and destination are separated by millions of light-years. Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From eric at m056832107.syzygy.com Sun Jan 24 05:52:27 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 24 Jan 2010 05:52:27 -0000 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <899032.83390.qm@web36504.mail.mud.yahoo.com> References: <20100124001558.5.qmail@syzygy.com> <899032.83390.qm@web36504.mail.mud.yahoo.com> Message-ID: <20100124055227.5.qmail@syzygy.com> Gordon writes: >--- On Sat, 1/23/10, Eric Messick wrote: >> Hey, no need to disparage my remarks with the label "religious"! > >Sure looks like religion to me! Care to explain how? I'm not seeing any deity. I'm not seeing any ritual, any superstition, any of the usual things associated with religion. All I'm seeing is a different axiomatic choice. Would you say that Euclidean versus non-Euclidean geometry is a religious issue? >That "neural firing pattern" amounts to mindless software running on > some computer. Yes, it's software running on a computer. The whole question here is whether or not there is a mind, so assuming there isn't one isn't going to help answer that question. Non-Euclidean geometry is actually pretty useful and interesting, so assuming that parallel lines intersect yields some interesting results. You've got this axiom that minds cannot be implemented in software. Does assuming that actually lead to any interesting or useful results? Actually, as I've pointed out before, I'm not quite sure what your assumption is. You've never managed to define your terms clearly enough to have actually made a coherent statement of your axiom. Is it: Syntax can never produce semantics. or: Software can never be part of a mind. or: Mind can never be simulated. or: Consciousness is not a computational process. ? All of these statements are similar. You're asserting an absolute disjunction between the sets: (software, syntax, simulation, computation) and (semantics, mind, consciousness, meaning, understanding).
While the first set is relatively well defined, the second set is quite slippery. You've got this idea that the second set is this really special thing, but all you can say about it is that it can't be built out of any of those other things. >> If no one has actually enjoyed an apple, who wrote the >> email? > >A program wrote it, one like Eliza (if you remember her) but perhaps > smart enough to fool you. A program was running, but nothing even remotely like that email was designed into it (unlike Eliza). The production of that email was an emergent behavior of 100 billion copies of the same relatively simple program all interacting with each other. Why are you so eager to put limits on what 100 billion simple entities can do when each is connected to thousands of others? Stomping your feet and insisting that it just *can't* be that way doesn't keep a system with a quadrillion connections from spontaneously creating a behavior that you can't understand. -eric From msd001 at gmail.com Sun Jan 24 06:36:23 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 24 Jan 2010 01:36:23 -0500 Subject: [ExI] heaves a long broken psi In-Reply-To: <589142.44940.qm@web65611.mail.ac4.yahoo.com> References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> <589142.44940.qm@web65611.mail.ac4.yahoo.com> Message-ID: <62c14241001232236h349cccf3ga5c7242de78421e8@mail.gmail.com> Just wanted to point out the interesting juxtaposition of targeted ads that accompany this thread in gmail: Relativity Challenge : Did Einstein make a math mistake? You be the judge! Join the experiment : Working Expairement In Entanglement Nothing to loose everything to gain Learn Telepathy NOW : Teach yourself telepathy for free. Notes from the flow music method. Biotech Nanoparticles : Uniform Nanobeads Nanospheres Fluorescent Magnetic Coated OEM Long Distance Moving : 100% Free Moving Estimates From Licensed Long Distance Movers. EquiSync's Binaural Beats : Deep meditative states every time with our binaural beat technology. WAVE Official Site : Water, Air & Energy Technology For Your Family's Well Being. More about... Science Experiment | Time Travel | ESP Telepathy | Free Psychic Readings | ... and now the thread and the ads are entangled too! (i wonder if the "Expairement" is a typo, or some trick of SEO) From stathisp at gmail.com Sun Jan 24 09:12:21 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jan 2010 20:12:21 +1100 Subject: [ExI] quantum brains In-Reply-To: <761294.98627.qm@web65601.mail.ac4.yahoo.com> References: <761294.98627.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/1/24 The Avantguardian : >And of course the microtubules are a bizarre non sequitur brought in by Hameroff because, like most molecular biologists, he thinks the protein or gene he is specialized in is the most important molecule in the world. So let's throw out the gravity, since astronauts in zero G don't get stupid or lose consciousness. And let's assume that microtubules are a structural component of the cytoskeleton and leave it at that. After all, neurons are not particularly mobile for cells, and if microtubules were the mechanism of consciousness, people would think with their muscles instead of > their brains, because myocytes have more microtubules than neurons.
And let's put off any conclusions regarding the computability or uncomputability of the brain until we have better defined the problem space and have more data. Let's just entertain the naked hypothesis that brains may utilize QM for some part of their function and see where it leads. Yes, but I am specifically interested in the question of brain computability, whatever the underlying mechanism. If brains are not computable then that has profound implications for philosophy of mind; if brains use quantum level events but are still computable then that is scientifically interesting but it doesn't make much difference philosophically. > Perhaps. But there are other parallels that don't apply to classical chaotic systems. Things like the mind:body -> quantum wavefunction:quantum information dualities that Serafino mentioned in his post, to which one could add the more cliche wave:particle duality. > > Then there is the way that excitatory post-synaptic potentials (EPSPs) and inhibitory post-synaptic potentials (IPSPs) can sum over time and space to trigger neuronal depolarization, which is very reminiscent of constructive and destructive interference. Although by my cursory swim through the literature, reports are all over the board with the quantitative measurements of these things, the mean voltages of these signals are approximately 5 millivolts and the mean current is about 25 picoamps. The average duration of these signals is about 20 milliseconds. Multiply all these rough figures together and you get approximately 2.5 femtojoules, or about 15,600 eV, which is admittedly too much energy for quantum effects. > > However there are EPSPs that fire spontaneously at a much smaller voltage, called miniature EPSPs or mEPSPs, that are in the range of about 400 microvolts, 10 picoamps, and last about 1 millisecond. These are thought to be caused by single vesicles of neurotransmitter being randomly released into the synaptic cleft. These things have an energy of about 25 eV, which for comparison is not much higher than the 13.6 eV ground-state binding energy of the hydrogen atom. Moreover these things happen quite frequently. > > http://jp.physoc.org/content/494/Pt_1/171.full.pdf+html > > Now it seems to me that these mEPSPs in the brain are very similar to the quantum fluctuations in normal matter, like the fluctuations in a nebula of hydrogen gas that could trigger the condensation of the gas into a protostar. Since EPSPs are additive, one or more of these things could push a subthreshold normal EPSP over the threshold, causing a recipient neuron to depolarize and initiate an action potential. So this is a potential mechanism for leaps of intuition, hunches, imagination, and creativity. And it seems a much more testable hypothesis than quantum microgravity yanking on microtubules a la Penrose and Hameroff. Now the interesting question is: are these things capable of the more bizarre quantum behavior like uncertainty, entanglement, and wave-particle duality? E.g., could mEPSPs interfere with themselves? Or might they exhibit particle properties and be called "psions"? It seems quite possible that quantum fluctuations might be amplified by the brain so that they have macroscopic effects on behaviour, but I don't see how this would do anything other than function as a random number generator. Essentially the neuron is just a black box which will fire or not fire depending on the inputs it receives and on its internal state.
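(A toy version of that black box, as a minimal Python sketch; every number here, threshold, signal sizes and mEPSP counts alike, is invented purely for illustration, and the noise source is deliberately just a pseudorandom generator:)

import random

THRESHOLD_MV = 11.0  # invented depolarization threshold
EPSP_MV = 5.0        # invented size of one ordinary EPSP
MEPSP_MV = 0.4       # invented size of one spontaneous miniature EPSP

def neuron_fires(n_epsps, rng=None):
    # Deterministic summation of inputs, plus optional random mEPSP noise.
    potential = n_epsps * EPSP_MV
    if rng is not None:
        potential += rng.randint(0, 5) * MEPSP_MV  # 0-5 random mEPSPs
    return potential >= THRESHOLD_MV

rng = random.Random(42)  # a true (quantum) RNG here would change nothing
                         # observable about the model's behaviour
fired = sum(neuron_fires(2, rng) for _ in range(10000))
print(f"2 EPSPs (10 mV, subthreshold): fired {fired}/10000 times")  # ~half
print(f"3 EPSPs (15 mV): fires deterministically -> {neuron_fires(3)}")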
If it contains an RNG, that would affect its propensity to fire; but modelling that should be trivial compared to the difficulty of modelling the complex interconnections between neurons and the deterministic component of each neuron's internal state: if you get that right, then the neuron model will function like a real neuron and the brain will function like a real brain, including consciousness. Penrose thinks it is impossible to model these things because neurons do something essentially non-algorithmic. Ordinary quantum mechanics doesn't cut it here, since it is computable with the exception of its randomness (unless you cheat and use a branching-world algorithm), which can be easily approximated with a pseudorandom number generator. So Penrose postulates that there is a non-algorithmic theory of quantum gravity which allows neurons to act as hypercomputers. This is how far he is forced to go in order to maintain his view that computers could never have true intelligence or consciousness! -- Stathis Papaioannou From eschatoon at gmail.com Sun Jan 24 09:37:19 2010 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Sun, 24 Jan 2010 10:37:19 +0100 Subject: [ExI] quantum brains In-Reply-To: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost> References: <32621423.1180761263852897121.JavaMail.defaultUser@defaultHost> Message-ID: <1fa8c3b91001240137n4cba88d3jea12dfacae1b9eba@mail.gmail.com> This is certainly a possibility, but I would not rule out the possibility that classical (non-quantum) systems can still generate really weird shit. Think of Wolfram's work. On Mon, Jan 18, 2010 at 11:14 PM, scerir wrote: > And of course since I'm persuaded that some psi phenomena are real, *something* > weird as shit is needed to account for them, something that can either do > stupendous simulations in multiple worlds/superposed states, or can modify its > state according to outcomes in the future. If that's not QM, it's something > equally hair-raising that electronic > computers aren't built to do. -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From pharos at gmail.com Sun Jan 24 10:21:56 2010 From: pharos at gmail.com (BillK) Date: Sun, 24 Jan 2010 10:21:56 +0000 Subject: [ExI] heaves a long broken psi In-Reply-To: <62c14241001232236h349cccf3ga5c7242de78421e8@mail.gmail.com> References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> <589142.44940.qm@web65611.mail.ac4.yahoo.com> <62c14241001232236h349cccf3ga5c7242de78421e8@mail.gmail.com> Message-ID: On 1/24/10, Mike Dougherty wrote: > Just wanted to point out the interesting juxtaposition of targeted ads > that accompany this thread in gmail: > > Gmail has ads!!! I never knew. I've used gmail for over 5 years and never seen any of their ads. Either Adblock Plus or CustomizeGoogle (Firefox add-ons) will make them disappear, and I use both. I suspected there was something missing in that big blank space at the right hand side - now I know. :) BillK From stefano.vaj at gmail.com Sun Jan 24 13:32:45 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 14:32:45 +0100 Subject: [ExI] Coherent vs.
Incoherent Fears of Being Uploaded In-Reply-To: References: <758269.36155.qm@web36505.mail.mud.yahoo.com> <580930c21001230919h314a017coe68b9bf06f4cc792@mail.gmail.com> <580930c21001231613i3440dfecr2de59be8bd0cc496@mail.gmail.com> Message-ID: <580930c21001240532y33e6d605m99e781524b5f975@mail.gmail.com> On 24 January 2010 01:37, Stathis Papaioannou wrote: > 2010/1/24 Stefano Vaj : >> I beg to differ. A gun held to one's head, or any other proximate >> chance for physical threat, often indicated by pain, simply unleashes >> one's gene whisper to run away as fast as one can. Exactly for the >> same reasons why a fruitfly does not dive into water or a rat likes >> sex. Because those who are inclined otherwise would not leave much >> offspring. > > That's the reason, certainly. There is nothing fundamentally wrong > with being dead. Still, even though I have this insight, I don't want > to be free of the manipulation; and this of course is part of the > manipulation. I am not saying that I want to. Just wondering whether the survival instinct of a nematode really depends on the alleged "consciousness" emerging from the ineffable features of its organic brain... BTW, I agree on the other hand that the ideas according to which a sufficiently "intelligent" computer would automagically express survival instincts, will to power, a sense of "identity", the perception/illusion of "consciousness" and other obviously anthropomorphic features and idiosyncrasies are simply projections. A computer, no matter how "intelligent", would do that only if it is expressly programmed and designed to emulate such features - and/or if it is the final product of an evolutionary history progressively selecting those traits amongst random variants. Thus, I am not very sanguine about the concept itself of AGI. There is nothing general in our own "intelligence", besides the universal computational abilities we share with innumerable non-human and non-organic systems, and a fully-persuasive, Turing-test-level, human-like "entity" running on a PC will be IMHO by definition either an uploaded human, or an arbitrary mix of human beings' characters. -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 24 13:36:18 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 14:36:18 +0100 Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: References: <580930c21001230951w32d675ecke839a988fad98aa2@mail.gmail.com> <580930c21001231620i3b2491d5geb42fb8d744d949c@mail.gmail.com> Message-ID: <580930c21001240536s5267a0fek641bf8a1a5ae50e1@mail.gmail.com> On 24 January 2010 01:43, Stathis Papaioannou wrote: > 2010/1/24 Stefano Vaj : > Yes, I agree with this, but people like Searle assert that everything > apart from consciousness is computable, and this needs specific > rebuttal. This would confirm that consciousness cannot and does not exist in our universe. Which of course is the alternative, but ultimately indifferent, way of seeing things ("zombies cannot exist, everybody is a zombie").
-- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 24 13:46:16 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 14:46:16 +0100 Subject: [ExI] heaves a long broken psi In-Reply-To: <4B5B9460.7060005@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> Message-ID: <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> On 24 January 2010 01:29, Damien Broderick wrote: > On 1/23/2010 6:06 PM, Stefano Vaj wrote: >> I am quite persuaded that if you try and guess a >> billion times a playing card you guess slightly more often than would >> be statistically warranted > > Experience indicates that this isn't true, actually, because boredom crushes > whatever is invoked by psi experiments, as it does with imagination, sex, > and many other human responses. You do better getting a million people to > make 1000 guesses each, or 10 million to make 100. Yes, whatever... :-) No, actually I was watching Ghostbusters with my son the other day, and I was wondering whether a mechanism of punishment-reward could actually improve the scores of participants in such experiments. Something else which came to my mind is whether different species of animals would attain similar (better, worse) scores in "telepathy" experiments... > This is often said dismissively, but I disagree. People turn to QT because > it is the best theory available for how the real world works, and it > sometimes turns out that interpretations of QT appear to be consilient with > at least some psi anomalies, far more so than with the predictions of > classic physics models. For example, entanglement does act in ways that > utterly defy traditional notions of separability. Indeed. But doesn't the theory expressly exclude that entanglement can be profited from in terms of information exchange? Moreover, wouldn't any "ordinary", albeit unknown/unclear, way of data transmission work equally well, at least with regard to the PSI anomalies that are the most likely to correspond to actual, repeatable phenomena, as in telepathy between two human subjects in plain view of each other? -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 24 13:48:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 14:48:41 +0100 Subject: [ExI] quantum brains In-Reply-To: <761294.98627.qm@web65601.mail.ac4.yahoo.com> References: <761294.98627.qm@web65601.mail.ac4.yahoo.com> Message-ID: <580930c21001240548g44e00fdfn32d8af8c88ceadd2@mail.gmail.com> On 24 January 2010 02:07, The Avantguardian wrote: > I don't care what Penrose thinks except insofar as I agree that QM may play a role in brain function. If we live in a quantum world, it could not be otherwise. What I am wary of are claims that this would be the case for brains in some essentially different sense from that of QM playing a role in a liver, PC, or engine function.
-- Stefano Vaj From pharos at gmail.com Sun Jan 24 14:43:42 2010 From: pharos at gmail.com (BillK) Date: Sun, 24 Jan 2010 14:43:42 +0000 Subject: Re: [ExI] heaves a long broken psi In-Reply-To: <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> Message-ID: On 1/24/10, Stefano Vaj wrote: > Indeed. But doesn't the theory expressly exclude that entanglement can > be profited from in terms of information exchange? Moreover, wouldn't > any "ordinary", albeit unknown/unclear, way of data transmission work > equally well, at least with regard to the PSI anomalies that are the > most likely to correspond to actual, repeatable phenomena, as in > telepathy between two human subjects in plain view of each other? > > The point about humans guessing at random is that the human brain doesn't do 'random'. The brain is always looking for patterns, even where none exist. In psi tests the brain is continuously making up stories, like the ball 'must' land on red next, or tails is expected now, or the next symbol must be a star. If you give the brain a random list, it is most unlikely that the human will guess correctly at the expected chance (random) level. Either because the test was too short: the expected chance level is only achieved over long-duration tests, which smooth out random fluctuations. Or the human was making up patterns of guesses (it has to - that's the way it works) and the patterns don't match a randomized list: they will be better or worse. Or you can keep analysing the guesses, matching the one before or the one after, or last week's guesses with this week's tests, etc. etc., desperately thrashing around until you find something that you could call psi. I call it random. That's why the tests are not repeatable. BillK From avantguardian2020 at yahoo.com Sun Jan 24 14:39:42 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 24 Jan 2010 06:39:42 -0800 (PST) Subject: [ExI] quantum brains In-Reply-To: <580930c21001240548g44e00fdfn32d8af8c88ceadd2@mail.gmail.com> References: <761294.98627.qm@web65601.mail.ac4.yahoo.com> <580930c21001240548g44e00fdfn32d8af8c88ceadd2@mail.gmail.com> Message-ID: <363690.50122.qm@web65612.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stefano Vaj > To: ExI chat list > Sent: Sun, January 24, 2010 5:48:41 AM > Subject: Re: [ExI] quantum brains > > On 24 January 2010 02:07, The Avantguardian wrote: > > I don't care what Penrose thinks except insofar as I agree that QM may play a > role in brain function. > > If we live in a quantum world, it could not be otherwise. > > What I am wary of are claims that this would be the case for brains > in some essentially different sense from that of QM playing a role in > a liver, PC, or engine function. In so far as I know, current PCs and engines are not designed to utilize QM to achieve any greater functionality. Livers and brains, however, were optimized over hundreds of millions of years of evolution to exploit every possible adaptive advantage from physical law and thus could achieve greater functionality by harnessing quantum effects. So I guess any QM effects would be a "bug" in a PC or engine but could be a "feature" in livers and brains.
Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From scerir at libero.it Sun Jan 24 16:10:26 2010 From: scerir at libero.it (scerir) Date: Sun, 24 Jan 2010 17:10:26 +0100 (CET) Subject: [ExI] heaves a long broken psi Message-ID: <23014997.177151264349426333.JavaMail.defaultUser@defaultHost> [Jeff] What a pleasure this is. I've long admired the quality of your physics savvy. And I look forward to responding to your comment. Problem is: how to disentangle the whimsy indicated by the two smiley's, from the serious. Anyway, here goes... First, I loved the "there is no passion at [a] 'distance'." I'm not at all sure what that means, but I love it, and I'm gonna let it simmer at length in my subconscious, cause "I need the eggs". Whenever a techno-weenie, er,... technophile summons passion into the techno-whoopie, well, I mean,...science fired by passion? Yeah, baby, I'm for that. (It's the Italian thing, isn't it? The 'passion' connection....) [s.] Well ... while 'action at a distance' appears as something brutal, like the possibility of sending FTL signals, or stuff, or energies, 'passion at a distance' appears to be a more gentle concept, like the possibility of FTL reciprocal 'influences' between the two space-like separated entangled particles, or the possibility of FTL hidden communications 'between' the two entangled particles (human FTL communication being excluded). Speaking of these things it seems important to trace-out the instrumental measurements, since there are lands, in the universe, in which there are no instruments, no men, no measurements, and no many-worlders, but there must be the entanglement for sure. But (as Mermin says) there is the possibilty of 'fashion at a distance'. In other words, both entangled particles (space-like separated) act and feel passion at the same time, a-causally and a-temporally, since it is impossible to say - at least for us humans and being the particles space-like separated - which one acts first, or feels passion first, and which then (see the so called 'before-before' and 'after-after' Geneva experiments). In other words ... it does not make sense to speak of 'action at a distance' or 'passion at a distance'. The only actual concept being that of 'non-separability' of entangled parties. [Jeff] Okay, so now let me take a few deep breaths and calm down. "Self- replicating machine systems, ...self-replicating machine systems,...self- replicating machine systems,...etc.")(It's my mantra.Takes me to my 'happy place'...) Ok. I'm all calm now. You wrote: So, for the bi-waves, there is no action at 'distance'... " Uh,... well, yeah. You start with, "...very close but no." But I don't see where we disagree, cause that's exactly what I was saying. There is no action at a distance. I must have communicated poorly. Regarding wave-particle duality, ok, I've heard of that. As I understand it, all the sparkling bits in our universe can be described using two equally valid formulations, the classical which gives us particles, and the quantum which gives us waves. And it was my impression there is this idea that until observed/measured, any given bit is somehow "indeterminate", existing as both particle and wave, and in all acceptable configurations at once, but only in some probabilistic potentiality. Part and parcel of quantum weirdness. Past my hat size. Makes my head hurt. [s.] Yes, there is a smooth transition between the particle-like nature and the wave-like. 
The more you pretend to know, or, to say it better, the more it is possible to know (in experiments, like the 'which-path', etc.), the more the wave-like nature vanishes. 'In an experiment the [quantum] state reflects not what is actually known about the system, but rather what is knowable, in principle, with the help of auxiliary measurements that do not disturb the original experiment. By focusing on what is knowable in principle, and treating what is known as largely irrelevant, one completely avoids the anthropomorphism and any reference to consciousness that some physicists have tried to inject into quantum mechanics.' -Leonard Mandel (Rev. Mod. Phys., 1999, p. S-274) But - speaking in general, eh! - it is not a mechanical effect, it is not a problem of a material perturbation, it is not disturbance. It is something like a principle of limited, finite, available quantity of information. It is not possible to know more than 'that'. If you extract 'that' quantity of information, or even when it is in principle possible to extract 'that' quantity of information, well ... you are done. 'The superposition of amplitudes is only valid if there is no way to know, even in principle, which path the particle took. It is important to realize that this does not imply that an observer actually takes note of what happens. It is sufficient to destroy the interference pattern, if the path information is accessible in principle from the experiment or even if it is dispersed in the environment and beyond any technical possibility to be recovered, but in principle "still out there".' -Anton Zeilinger (Rev. Mod. Phys., 1999, p. S-288) [Jeff] But here's the deal. If you send two entangled photons off in different directions -- the Aspect experiment? -- the experimental apparatus already allows you to treat them as particles. How else can you send them off in different directions? So the bi-wave is already (or is it 'still'?) also a bi-particle. And isn't this, in fact, generally the case? Can't I take any photon/photon wave (or massy baryonic or leptonic particle) at any time and describe it with equal validity using either the particle or wave protocol? That is, can't I put it in a box and describe it as a wave, open the box, look in, and describe it as a particle, put my hands over my eyes, treat the room as just a big box, and keeping my eyes closed, describe it as a wave again. Isn't the wave-particle duality a 24/7 thing? Always on, so to speak, ready at every instant for a "reset" to simultaneous wave particle potentiality? [s.] You can describe things using the particle picture, or the wave picture, or both (Bohmian mechanics), or Feynman's paths. Or you can use the quantum fields formalism (in general this is the choice). The wave picture sometimes is difficult, for conceptual reasons (ie, what they are made of), but sometimes the description is simpler with waves. But we do not know (before measurements) if they are particles, waves, or fields, or ... all of them. (Feynman knew they were particles, and only particles.) But yes, during an experiment - i.e. a two-particle interference exp. with two entangled particles - you can erase the information you already got (i.e. about the which-way of a specific particle) and restore the wave-like nature of that specific particle. Again, there is no 'disturbance' effect here, since you are using joint observables like polarization and position, which of course commute.
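(The which-path rule is easy to make concrete in code: with indistinguishable paths you add amplitudes and get fringes; with a marker that makes the paths distinguishable, even in principle and even if never read, the cross terms vanish and you add probabilities instead. A toy sketch; the phases and geometry are invented for illustration:)

import numpy as np

x = np.linspace(-3.0, 3.0, 7)            # detector positions (arbitrary)
a1 = np.exp(1j * 2.0 * x) / np.sqrt(2)   # invented amplitude from path 1
a2 = np.exp(-1j * 2.0 * x) / np.sqrt(2)  # invented amplitude from path 2

p_unmarked = np.abs(a1 + a2) ** 2             # add amplitudes: interference
p_marked = np.abs(a1) ** 2 + np.abs(a2) ** 2  # add probabilities: flat

print("paths indistinguishable: ", np.round(p_unmarked, 2))  # fringes, 0..2
print("path marked in principle:", np.round(p_marked, 2))    # 1.0 everywhere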
(Using other techniques - weak measurements - you can also 'undo' a measurement, or it seems so.) [Jeff] But I digress... So you have these two photonic bits, produced by an experimental apparatus that allows us to assert with some confidence certain particle-associated parameters -- ie position and velocity in 3D space -- (which it seems (to me) we could cover our eyes and reformulate into wave-associated parameters, cause physics takes no time outs), and we say "That one's over there", pointing off in the distance, "and the other one is over here", pointing to a photon trap on the laboratory bench. And now you are prepared to go to the photon trap and slap the captured photon around a bit until it (and its 'distant' accomplice) decohere/disentangle. Alternatively -- in recognition of the semantic challenges which confront us -- I could say, "Disturb the "photon trap" until the single quantum waveform which constitutes the "not-two" "not particles" collapses and spits out the result: two particles localized in 3D space, or two new, no-longer-entangled quantum waveforms INSTANTANEOUSLY extending to the furthest reaches of 3D space, and whose waveforms might very well be mathematically superposed and seen yet again as a single waveform -- a component in the greater single waveform which is the entire universe. Damn! There I go digressing again. Okay, okay. (Takes a deep breath and gets back to it.) So you wrote: " Possibly, for the bi-waves, there is no even 'distance' " Yes, yes, that was precisely my point, but you threw me off with that "...very close, but no." In our 4D realm everything that manifests bears the features of 4D's. Can't escape that IN THIS REALM. In fact, I'm inclined to posit that particle-ness can't exist without spatial and temporal separation. Classical physics is 4D spacetimey-ness. So your bi-waves can't exist in this realm without bearing the marks of and conforming to the rules of 4D spacetime. Those marks are two distinct photons at a known measurable distance, and with no possibility of intimate instantaneous contact (I assert). Ergo, in our 4D spacetime, instantaneous action across distance must be mediated by and in other dimensions. I avoided using the term "in contact" above, because "in contact" is 'spatial'. It implies 4D. Whereas, in other dimensions, what can be said of space and time? Perhaps nothing. We have space and time in our 4D's, and are much pleased, but that shouldn't prejudice us. No reason to project what we know onto what we don't. And hey, they're OTHER dimensions, they should be different, not some boring rehash of what's already been done. (Okay, that's not really a 'logical' argument, more like an aesthetic one.) So if you want to assert that the bi-waves know no separation, know no 'distance', I'm ready to entertain that notion so long as it is restricted to some extra-dimensional context where the "rules" of existence may not need, may in fact exclude what we experience as space and time. Okay, I've embarrassed myself enough for one day. Best, Jeff Davis [s.] Very close again. But there is another path, to be explored. We have space-time theories, which do not allow the possibility of FTL signaling (random FTL signaling or tachyons and other pathologies are perhaps allowed, but let us skip that). We have quantum principles (like the No-Cloning, the Linearity, maybe the Uncertainty Principle, etc.) which also - and independently - do not allow FTL signaling.
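(The no-signaling half of that can even be checked numerically for a singlet pair: however Bob sets his apparatus, Alice's local statistics stay at 50/50, although the joint correlations are as nonseparable as ever. A minimal numpy sketch; the angles are arbitrary choices:)

import numpy as np

psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # singlet (|01>-|10>)/sqrt(2)

def projector(theta, outcome):
    # Projector for outcome +1/-1 of a spin measurement along an axis
    # at angle theta in the x-z plane.
    if outcome == +1:
        v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    else:
        v = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return np.outer(v, v)

def joint_prob(a, x, b, y):
    # P(Alice reads x at angle a AND Bob reads y at angle b).
    return float(psi @ np.kron(projector(a, x), projector(b, y)) @ psi)

a = 0.3  # Alice's fixed setting, in radians
for b in (0.0, 1.0, 2.5):  # Bob tries wildly different settings
    marginal = joint_prob(a, +1, b, +1) + joint_prob(a, +1, b, -1)
    print(f"Bob at {b:.1f} rad: P(Alice reads +1) = {marginal:.6f}")  # 0.5

(Alice's marginal never budges, so no message crosses; yet the joint statistics obey E(a,b) = -cos(a-b), which no separable model reproduces.)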
So, there is the so called "Peaceful Coexistence" between quantum & relativity. But we also have that weird non-locality (rectius non-separability) between space-like separated parties, which is difficult to *explain* in the framework of space-time theories, and which is also (ie, according to John Bell) a huge violation of the "spirit" of Special Relativity. Thus, one has to introduce at least another 'landscape'. It is then possible that the reality we experience is a superposition of two things. A sort of Quantum Operating System and the usual physical space-time scenario. The Quantum Operating System operates not in the real space, or space-time, but in a specific abstract space. In this abstract space there is linear algebra, there is superposition, there are no physical distances, thus there is non-separability between entangled parties. When we look at monitors in front of us we cannot see what/when/why/how the CPU and the operating system are processing at the same time, but we know there is a CPU at work. ... When we look at events in the space-time scenario we do not see what the Quantum Operating System is, behind that curtain, processing at the same time. From scerir at libero.it Sun Jan 24 16:32:18 2010 From: scerir at libero.it (scerir) Date: Sun, 24 Jan 2010 17:32:18 +0100 (CET) Subject: [ExI] quantum brains Message-ID: <19343780.178901264350738516.JavaMail.defaultUser@defaultHost> > On Mon, Jan 18, 2010 at 11:14 PM, scerir wrote: >> And of course since I'm persuaded that some psi phenomena are real, *something* >> weird as shit is needed to account for them, something that can either do >> stupendous simulations in multiple worlds/superposed states, or can modify its >> state according to outcomes in the future. If that's not QM, it's something >> equally hair-raising that electronic >> computers aren't built to do. > This is certainly a possibility, but I would not rule out the > possibility that classical (non-quantum) systems can still generate > really weird shit. Think of Wolfram's work. > Giulio Prisco I can agree. But I did not write the above. Damien did, I suppose. :-) From thespike at satx.rr.com Sun Jan 24 16:55:53 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 24 Jan 2010 10:55:53 -0600 Subject: [ExI] heaves a long broken psi In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> Message-ID: <4B5C7B99.6050204@satx.rr.com> On 1/24/2010 8:43 AM, BillK wrote: > If you give the brain a random list, it is most unlikely that the > human will guess correctly at the expected chance (random) level. > Either because the test was too short: the expected chance > level is only achieved over long-duration tests, which smooth out random > fluctuations. > Or the human was making up patterns of guesses (it has to - that's the > way it works) and the patterns don't match a randomized list: they > will be better or worse. Hardly anyone uses iterated guessing protocols any more (remote viewing protocols are far more informative, because they contain high entropy gradient targets and are better fitted to the way the mind looks for meaningful structures), but there is a huge database of just such experiments compiled at Princeton and elsewhere.
The existence of preference patterns in unmotivated calls is one of the first things established in such experimental runs, and the most interesting aspect to look for is deviations from such individual or population biases. I looked back at a lot of data accumulated between the 1930s and 1950s, comparing calls against the background preference patterns, looking for the proportion of guesses when a particular option is the target and comparing that vote with the proportion it got when not a target. That is, the comparison is not made against "expected chance (random) level" but against an internal control that tracks non-random preferences. Since the target list is random and the choices are made blindly, there is no apparent way in which "when target" scores can be significantly deviant from "when not-target" scores--yet they are, to a significant extent. > I call it random. That's why the tests are not repeatable. I see. First you explain that psi tests are bound to give results that deviate from chance because the mind produces patterned or skewed or non-random streams of calls, and these inevitably match better or worse than m.c.e. (mean chance expectation) against a short randomized list, which easily accounts for the non-chance results of parapsychologists--and then you explain that this is why such tests can never be replicated by non-parapsychologists! If what you say were valid, anyone who tries this should get results at the same level, including the most ruthless skeptics, because it's just an artifact, right? But of course it is only those gullible parapsychologists who do so. So we're driven back to John-Clark-type "they just made it all up" or "they all cheated" theories. Damien Broderick From gts_2000 at yahoo.com Sun Jan 24 17:12:08 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 24 Jan 2010 09:12:08 -0800 (PST) Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: Message-ID: <391286.82869.qm@web36505.mail.mud.yahoo.com> --- On Sat, 1/23/10, Stathis Papaioannou wrote: > Yes, I agree with this, but people like Searle assert that > everything apart from consciousness is computable Not really an accurate characterization. Better would be "Searle asserts that everything about the brain is computable but that a computation of a brain is not a brain." Again, digital simulations of things do not equal the things they simulate. Exceptions to this rule may be found in the digital world of software and hardware, but in that case I call them copies (or perhaps emulations as Damien likes to say), not simulations. IFF human brains exist in actual fact as digital computers then computations (simulations) of them will have consciousness. But Searle makes a strong case, one that I find compelling, that actual human brains do not exist as digital computers. -gts From stefano.vaj at gmail.com Sun Jan 24 18:06:34 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 19:06:34 +0100 Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: <391286.82869.qm@web36505.mail.mud.yahoo.com> References: <391286.82869.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21001241006q3d5ed6ddufc0d1ff0ba7128bb@mail.gmail.com> On 24 January 2010 18:12, Gordon Swobe wrote: >But Searle makes a strong case, one that I find compelling, that actual human brains do not exist as digital computers. Which, after the persuasive arguments made in the indicated essay, would indicate that actual human brains would not exist in this universe.
A much stranger conclusion than the fact, simply obscured by a few centuries of philosophical dualism, that "consciousness" has no really different ontological status from, say, digestion or popular sovereignty or any other process. -- Stefano Vaj From jonkc at bellsouth.net Sun Jan 24 18:35:11 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 24 Jan 2010 13:35:11 -0500 Subject: [ExI] digital simulations, descriptions and copies. In-Reply-To: <353514.63292.qm@web36508.mail.mud.yahoo.com> References: <353514.63292.qm@web36508.mail.mud.yahoo.com> Message-ID: <71D59D2C-0664-400C-9859-9D46311A44BF@bellsouth.net> Since my last post Gordon Swobe has written 9, I shall write one. > Searle asserts that everything about the brain is computable but that a computation of a brain is not a brain Well of course it's not a brain, it's a mind. Minds are important, brains aren't. > I would consider an emulation of an apple a copy, and yes we cannot create emulations of apples, I already covered that subject in my first post in this thread. You act as if your post was the Talmud and had forever settled the question, when in reality it's just another in an endless series of pointless thought experiments which implicitly or explicitly always contain the line "and obviously there can't be any consciousness there so..."; and even at their best they can't hope to prove anything except that I, Gordon Swobe, am puzzled by the link between intelligence and consciousness. In not one of your dozens of thought experiments or hundreds of posts have you been able to explain away the astronomical amount of evidence showing that, understand it or not, there is indeed such a link. > people say that digitally simulated people can eat and taste digitally simulated apples. Eric has come forth to defend that idea. Will you defend it too? Yep. > > the simulated thermostat will not regulate temperature in a real room without adding hardware No thermostat, simulated or otherwise, will regulate temperature in a room without adding other hardware. And all thermostats are simulations, so what is a simulated thermostat, a simulation of a simulation? Don't you think this is getting a little silly? > Digital simulations of non-digital objects never equal the things they simulate Who cares? I'm not talking about objects, I'm not even talking about nouns, I'm talking about people. > digital simulations of things do not equal the things they simulate. Exceptions to this rule may be found in the digital world of software and hardware That's all the exception I need. John K Clark From pharos at gmail.com Sun Jan 24 19:28:16 2010 From: pharos at gmail.com (BillK) Date: Sun, 24 Jan 2010 19:28:16 +0000 Subject: [ExI] heaves a long broken psi In-Reply-To: <4B5C7B99.6050204@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> Message-ID: On 1/24/10, Damien Broderick wrote: > I see. First you explain that psi tests are bound to give results that > deviate from chance because the mind produces patterned or skewed or > non-random streams of calls, and these inevitably match better or worse than > m.c.e.
against a short randomized list, which easily accounts for the > non-chance results of parapsychologists--and then you explain that this is > why such tests can never be replicated by non-parapsychologists! If what you > say were valid, anyone who tries this should get results at the same level, > including the most ruthless skeptics, because it's just an artifact, right? > > Slight misunderstanding here. I meant that, of course, the tests can be done by anyone, although it's not as simple to set up as it might appear. The second 'random' was meant to refer to the fact that sometimes you get results above chance level and sometimes you get results below chance level. And, assuming all the strict controls are in place and actually work properly, that has nothing to do with psi but is due to the unpredictability of the universe we live in. BillK From jrd1415 at gmail.com Sun Jan 24 19:36:20 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 24 Jan 2010 12:36:20 -0700 Subject: [ExI] heaves a long broken psi In-Reply-To: <589142.44940.qm@web65611.mail.ac4.yahoo.com> References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> <589142.44940.qm@web65611.mail.ac4.yahoo.com> Message-ID: On 1/23/10, The Avantguardian wrote: > FWIW, when you are talking about photons even at the other end of the > physics i.e. special relativity, they don't experience the same 4D space-time > that we do. Similar to Serafino's observation, the dimensionality of the > universe is not increased but is instead reduced. (snip crucially on point Lorentz/relativity math) > Thus in their own > frames of reference, photons from a source never leave the source, and their > source and their destination is one and the same. They don't move, because > they have no space or time with which to move within. They are always in > contact in the same point in 2D space at the same instant in time even if > from the reference frame of slow moving matter their source and destination > is separated by millions of light-years. Thank you Stuart. I spent a couple of hours on my post yesterday, and just couldn't get to -- Serafino, is it? -- Serafino's last point about reduced dimensionality. So, yes, yes, yes, and thank you for commenting almost precisely as I had wanted to. Many years ago I took that ride on a photon, so that I might see what the photon sees. And with the help of Herr Lorentz, I "saw" what you describe. But, as is so often the case, this "vision" fueled more questions, in particular one big thorny bad boy -- i.e. a good one. ... But... bear with me here for a crucial digression... Yesterday, thinking about the photon ride as I have so often in the past, I came upon something new. Originally, the photon ride was exclusively a thought experiment. It had to be, ***of course***, because the photon is massless, and I am not. But then, yesterday, it dawned on me -- arising from our recent discussions of mind, consciousness, and substrates bio and non -- that the "mind" has no mass (excepting of course like that of the photon, where hv = E = mc^2), and thus, theoretically, I COULD take that ride. But then the questions burst forth. The pure energy form of mind, unburdened by the messy massy-ness of its substrate, is so photon-like, that we might (must?) presume that it penetrates instantaneously to fill the entire universe. But wait, on further reflection it seems that it is already there. What's more, photon-mounted or not, the mind was and is always photon-like, and so must ever and always saturate all of spacetime.
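A quick numeric aside makes the Lorentz bookkeeping behind the photon ride concrete. This is a minimal sketch only (Python; the one-light-year trip and the particular speeds are arbitrary choices for illustration, not anything from the thread): the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) divides the traveler's onboard (proper) time, and it diverges as v approaches c.

import math

C = 299792458.0        # speed of light, m/s
LIGHT_YEAR = 9.4607e15 # metres
YEAR = 3.156e7         # seconds

def lorentz_gamma(v):
    # gamma = 1 / sqrt(1 - v^2/c^2); diverges as v -> c
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def onboard_time_years(distance_m, v):
    # coordinate time d/v, divided by gamma for the traveler
    return (distance_m / v) / lorentz_gamma(v) / YEAR

for fraction in (0.9, 0.999, 0.999999999999):
    v = fraction * C
    print(f"v = {fraction} c: gamma = {lorentz_gamma(v):.4g}, "
          f"onboard time for 1 light-year = {onboard_time_years(LIGHT_YEAR, v):.3g} yr")

At the last speed shown, a light-year passes in under a minute of onboard time; in the limit v = c the factor is infinite and the onboard interval is exactly zero, which is the "frozen" frame described next.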
Also, annoyingly, it seems the photon-riding mind cannot function, because mind is dynamic and needs the passage of time to proceed from state to state. So what would be the character of the photon-riding mind, existing as surely as does the photon, yet "frozen" out of time? Now we see the warring duality of the two frames of reference. From the photon-rider's point of view, space is collapsed, time is stopped, existence is possible, but thinking is not. While in the 4D view, everything is schmushed out into our tasty raisin-cake universe, and the photon and mind are humping right along, with the mind on pause, its relativistic "clock" slowed to zero at v=c. Yet the mind exists in both, simultaneously, immune to the contradictions. So the contradictions must be illusory, awaiting reconciliation. I could go on forever, the questions multiply faster than bunnies, but one overriding question emerges from the mist. LET us go then, you and I, When the evening is spread out against the sky Like a patient etherised upon a table; Let us go, through certain half-deserted streets, The muttering retreats Of restless nights in one-night cheap hotels And sawdust restaurants with oyster-shells: Streets that follow like a tedious argument Of insidious intent To lead you to an overwhelming question ... Oh, do not ask, "What is it?" Let us go and make our visit. Returning now to our regular programming... So how can you have a photon in 4D that starts out not existing, is then created, transits time and space, interacts, and then ceases to exist, while at the same time (ick!) have the Lorentz-predictive view, where the photon can't move and is eternal, and have the two be consistent? Somethin's gotta give. The above is a horrifying mishmash and I should just delete it. But I'm too lazy. You can delete it for me. Enough embarrassment for one day. Best, Jeff Davis Aspiring Transhuman / Delusional Ape (Take your pick) Nicq MacDonald From thespike at satx.rr.com Sun Jan 24 20:00:52 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 24 Jan 2010 14:00:52 -0600 Subject: [ExI] heaves a long broken psi In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> Message-ID: <4B5CA6F4.5040506@satx.rr.com> On 1/24/2010 1:28 PM, BillK wrote: > The second 'random' was meant to refer to the fact that > sometimes you get results above chance level and sometimes you get > results below chance level. And, assuming all the strict controls are > in place and actually work properly, that has nothing to do with psi > but is due to the unpredictability of the universe we live in. This is untrue, though. The whole point of using statistics in science is that with sufficient data points, random correlations can be disambiguated from the non-random. Do you really suppose that parapsychologists with PhDs and professorships in statistics or physics have never noticed the elementary facts you're pointing at? One of the most rudimentary controls is to match a string of calls not only against the actual target array but against strings of equal length with different arbitrary "targets"--and waddaya know, those control comparisons turn out to agree closely with chance expectation.
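For anyone who wants to check that control logic by brute force, here is a minimal Monte Carlo sketch (Python; the five-symbol deck and the caller's bias weights are invented purely for illustration). A heavily patterned caller scores against a genuinely random target string at almost exactly the same rate as against an arbitrary control string, so response bias alone cannot manufacture a "when target" versus "when not-target" gap:

import random

SYMBOLS = "ABCDE"
BIAS = [0.40, 0.25, 0.15, 0.12, 0.08]  # caller strongly over-calls 'A' (assumed weights)

def hit_rate(calls, targets):
    # fraction of trials where the call matched the target
    return sum(c == t for c, t in zip(calls, targets)) / len(calls)

n = 200000
calls = random.choices(SYMBOLS, weights=BIAS, k=n)  # patterned, non-random caller
real_targets = random.choices(SYMBOLS, k=n)         # true random target string
control_targets = random.choices(SYMBOLS, k=n)      # arbitrary "control" targets

print("vs real targets:   ", round(hit_rate(calls, real_targets), 4))
print("vs control targets:", round(hit_rate(calls, control_targets), 4))
# Both hover around 0.20; a significant gap between them in real data is
# exactly what the internal-control comparison is designed to catch.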
Psi studies don't just look for "results above or below chance level" in some vapid "oh look, it's not exactly 0.20 right, it must be magic!" The word *significant* has a technical meaning (as you surely know). If a subset of trials deviated significantly from m.c.e. because the mind's propensity to form patterns happens by chance to match the random target string for a few minutes, it will soon stop doing so (and hence balance out back within one sigma, two-tailed, or so) if there is nothing forcing the correspondence. That's true whether you're looking for psi or the effectivity of aspirin in heart attack prevention (which has a comparable low effect size, as demonstrated by Prof. Jessica Utts). Damien Broderick From spike66 at att.net Sun Jan 24 20:05:14 2010 From: spike66 at att.net (spike) Date: Sun, 24 Jan 2010 12:05:14 -0800 Subject: [ExI] heaves a long broken psi In-Reply-To: References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost><589142.44940.qm@web65611.mail.ac4.yahoo.com> Message-ID: <4B305952E3444B348E69ADEB3F2A7FB6@spike> > ...On Behalf Of Jeff Davis > ... > To lead you to an overwhelming question . > Oh, do not ask, "What is it?" > Let us go and make our visit. ... > Best, Jeff Davis Jeff I like your poetry better than mine. > Aspiring Transhuman / Delusional Ape (Take your pick) > Nicq MacDonald > _______________________________________________ Where is Nicq MacDonald these days? She hasn't posted for years. Anyone here buddies with her? Do pass along the invite to drop in and say hello to old pals down here in ExI-chat. spike From msd001 at gmail.com Sun Jan 24 20:19:22 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 24 Jan 2010 15:19:22 -0500 Subject: [ExI] heaves a long broken psi In-Reply-To: References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> <589142.44940.qm@web65611.mail.ac4.yahoo.com> Message-ID: <62c14241001241219u7fd09389r6a79f01309624548@mail.gmail.com> On Sun, Jan 24, 2010 at 2:36 PM, Jeff Davis wrote: > questions burst forth. ?The pure energy form of mind, unburdened by > the messy massy-ness of its substrate, is so photon-like, that we > might (must?) presume that it penetrates instantaneously to fill the > entire universe. ?But wait, on further reflection it seems that it is > already there. ?What's more, photon-mounted or not, the mind was and > is always photon-like, and so must ever and always saturate all of > spacetime. ? Also, annoyingly, it seems the photon-riding mind cannot > function, because mind is dynamic and needs the passage of time to > proceed from state to state. ?So what would be the character of the > photon-riding mind, existing as surely as does the photon, yet > "frozen" out of time? > > ?Now we see the warring duality of the two frames of reference. ?From > the photon-rider's point of view, space is collapsed, time is stopped, > existence is possible, but thinking is not. ? While in the 4D view, > everything is schmushed out into our tasty raisin-cake universe, and > the photon and mind are humping right along, with the mind on pause, > its relativistic "clock" slowed to zero at v=c. Yet the mind exists in > both, simultaneously, immune to the contradictions. ?So the > contradictions must be illusory, awaiting reconciliation. I imagine the 10 (11?) or 26 dimensions of string theory having a few extra dimensions (ultimately, universes) to manage this frozen-state problem. Take any frame of a movie with which you are familiar - what is "happening" at that moment. 
The frame has zero happening, but your understanding of the context of the movie can identify its location in the plot, and the 'happening' you could validly discuss is a vector through that plot, of which this frozen state is only a particular derivative. I'm certainly no physicist, but I imagine an oft-overlooked point on the 'rolled up' dimensions of string theory is that these extra non-spacetime dimensions may have measure less than 1 Planck unit, but that doesn't mean they don't contain multivariate data. (E.g., the open interval from -1 to 1 contains only values with absolute value less than 1, yet contains an infinite number of real values.) Back to the movie analogy: consider that when spacetime is 'reduced' out of consideration (as you mentioned with your photon ride leading to omnipresent unity) those other dimensions retain the details of 'context' for your mental movie. I also imagine that these other dimensions are the DNA for the universe we perceive. Perhaps 'fitness' for evolution is the intelligence of life produced in the entire range of spacetime for that universe. I wonder if it is possible to move between them analogously to getting off that photon-ride at the wrong station... From thespike at satx.rr.com Sun Jan 24 20:27:04 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 24 Jan 2010 14:27:04 -0600 Subject: [ExI] what? what? In-Reply-To: <4B305952E3444B348E69ADEB3F2A7FB6@spike> References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost><589142.44940.qm@web65611.mail.ac4.yahoo.com> <4B305952E3444B348E69ADEB3F2A7FB6@spike> Message-ID: <4B5CAD18.70206@satx.rr.com> On 1/24/2010 2:05 PM, spike wrote: >> To lead you to an overwhelming question . >> > Oh, do not ask, "What is it?" >> > Let us go and make our visit. > ... >> > Best, Jeff Davis > > Jeff I like your poetry better than mine. That's T. S. Eliot's poetry. > Where is Nicq MacDonald these days? She hasn't posted for years. She? From thespike at satx.rr.com Sun Jan 24 20:46:12 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 24 Jan 2010 14:46:12 -0600 Subject: [ExI] photons, both here and there In-Reply-To: <589142.44940.qm@web65611.mail.ac4.yahoo.com> References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost> <589142.44940.qm@web65611.mail.ac4.yahoo.com> Message-ID: <4B5CB194.3040107@satx.rr.com> On 1/23/10, The Avantguardian wrote: > Thus in their own > frames of reference, photons from a source never leave the source, and their > source and their destination is one and the same. They don't move, because > they have no space or time with which to move within. They are always in > contact in the same point in 2D space at the same instant in time even if > from the reference frame of slow moving matter their source and destination > is separated by millions of light-years. Ah ha, in my recent ASIMOV'S story, "This wind blowing, and this tide", my character muses: < This thing on Titan had been tugging at me, at my absurd and uncomfortable and highly classified gift, since I was four or five years old, running in the streets of Seoul, playing with a Red Devils soccer ball and picking up English and math. A suitable metaphor for the way a child might register the substrate of a mad universe, and twist its tail. My own son, little Song-Dam, plagued me with questions when he, too, was a kid, no older than I'd been when the starship buried under tons of frozen methane and ethane had plucked for the first time at my stringy loops. "If light's a wave, Daddy, can I surf on it?" Brilliant, lovely child!
"No, darling son," I said. "Well, not exactly. It's more like a Mexican football wave, it's more like an explosion of excitement that blows up." I pulled a big-eyed face and flung my arms in the air and dropped them down. "Boom!" Song laughed, but then his mouth drooped. "If it's a wave, Dad, why do some people say it's made of packets?" "Well," said I, "you know that a football wave is made of lots and lots of team supporters, jumping up and sitting down again." He wasn't satisfied, and neither was I, but the kid was only five years old. Later, I thought of that wave, sort of not there at all at one end, then plumping up in the middle, falling to nothing again as it moved on. Follow it around the bleachers and you've got a waveform particle moving fast. Kind of. But for a real photon, you needn't follow it, it's already there, its onboard time is crushed and compressed from the moment of launch to the final absorption, just one instantaneous blip in a flattened, timeless universe. Why, you could jump to the Moon, or Ganymede, or even Titan, all in a flash. Just entangle yourself with it, if you knew how (as I showed them how, much later), like Mr. Meagle remote viewing his impenetrable stationary starship.> Damien Broderick From thespike at satx.rr.com Sun Jan 24 20:51:33 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 24 Jan 2010 14:51:33 -0600 Subject: [ExI] Broken Time Translation Symmetry and State Reduction Message-ID: <4B5CB2D5.8070604@satx.rr.com> This might interest some here (Serafino?): Broken Time Translation Symmetry as a model for Quantum State Reduction Authors: Jasper van Wezel (Submitted on 21 Dec 2009) Abstract: The symmetries that govern the laws of nature can be spontaneously broken, enabling the occurrence of ordered states. Crystals arise from the breaking of translation symmetry, magnets from broken spin rotation symmetry and massive particles break a phase rotation symmetry. Time translation symmetry can be spontaneously broken in exactly the same way. The order associated with this form of spontaneous symmetry breaking is characterised by the emergence of quantum state reduction: systems which spontaneously break time translation symmetry act as ideal measurement machines. In this review the breaking of time translation symmetry is first compared to that of other symmetries such as spatial translations and rotations. It is then discussed how broken time translation symmetry gives rise to the process of quantum state reduction and how it generates a pointer basis, Born's rule, etc. After a comparison between this model and alternative approaches to the problem of quantum state reduction, the experimental implications and possible tests of broken time translation symmetry in realistic experimental settings are discussed. ---- A commentator notes: "Broken time translation symmetry is code for non-unitarity, of course." Which is usually regarded as a no-no. 
Damien Broderick From stefano.vaj at gmail.com Sun Jan 24 21:01:16 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 24 Jan 2010 22:01:16 +0100 Subject: [ExI] heaves a long broken psi In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> Message-ID: <580930c21001241301j59fd045bn9bd6454833b6867@mail.gmail.com> On 24 January 2010 15:43, BillK wrote: > Or you can keep analysing the guesses, matching the one before or the > one after, or last week's guesses with this week's tests, etc. etc. > desperately thrashing around until you find something that you could > call psi. I call it random. That's why the tests are not repeatable. My assumption is of course that they in fact are. More or less as with double-blind drug tests, even though with a much lower discrepancy (say, 0.0001% more guesses than would be warranted by statistics?) and an even lower consistency from one study to another. But pointing in the same direction. But hey, I never played the "guess-the-card" game myself nor do I claim to have actually performed any metastudy... I simply take Damien and other people having examined the issue more in depth at their word. Much more perplexing and anecdotal sound the stories about drawing the blueprints of a foreign secret base, etc. -- Stefano Vaj From bbenzai at yahoo.com Sun Jan 24 20:50:53 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 24 Jan 2010 12:50:53 -0800 (PST) Subject: [ExI] Gmail has ads!!! In-Reply-To: Message-ID: <620588.10605.qm@web113618.mail.gq1.yahoo.com> BillK exclaimed: > Gmail has ads!!! I never knew. > > I've used gmail for over 5 years and never seen any of > their ads. > Either Adblock Plus or CustomizeGoogle (Firefox add-ons) > will make > them disappear, and I use both. > > I suspected there was something missing in that big blank > space at the > right hand side - now I know. :) Yes, browsing is an entirely different (and intensely irritating) experience if you can't use Firefox, or some similar Adblock-capable browser. I see the constant struggle between ads and blockers as similar to the evolution of parasites and immune systems. (Same with spam, viruses, spyware, DRM, etc.). It'll be interesting to see what evolves over time, especially when this technology starts migrating inwards to our brains. Ben Zaiboc From thespike at satx.rr.com Sun Jan 24 21:36:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 24 Jan 2010 15:36:57 -0600 Subject: [ExI] heaves a long broken psi In-Reply-To: <580930c21001241301j59fd045bn9bd6454833b6867@mail.gmail.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <580930c21001241301j59fd045bn9bd6454833b6867@mail.gmail.com> Message-ID: <4B5CBD79.1090309@satx.rr.com> On 1/24/2010 3:01 PM, Stefano Vaj wrote: > Much more perplexing and anecdotal sound the stories about drawing > the blueprints of a foreign secret base, etc. Exactly.
I've had a lot of this directly from the scientists and military personnel in the now disbanded STAR GATE program, and it's hair-raising stuff. (And I say that as a largely bald guy.) An enormous amount about the process is now known, including a lot of the problems associated with it. As far as I can model it, the process is something like allowing the mind to meander through one's stockpile of images, some of which are then highlighted and reshaped by whatever this capacity is (I have no idea what its vector could be). It seems very visual and haptic, in that viewers sketch and report images, homing in on gestalts of the scene they are trying to apprehend. Sometimes their own evaluation of what they've "seen" turns out to be wrong, yet key elements of their drawings and reports stand out enough for blinded judges to identify which of four or five possible targets most closely matches the reported data. None of this is especially surprising (except the fact that it happens at all) because a lot of psychology experiments have shown that this is how memory works too, by construction rather than xerox copying. ("So why aren't they rich? Why was STAR GATE shut down?" Because psi is unreliable and skittish. If I can write award-winning fiction, why aren't I rich? Beats the hell out of me, it's just so wrong. And, you know, the Manhattan Project wasn't exactly discussed every day in the New York Times during WW2. For most of its tenure STAR GATE was highly classified, funded for some 20 years with annual reviews by high-level assessors; when some leaks started, and the story looked about to come out, the government probably had no option but to kill it publicly, and salt the corpse with ridicule. --But we know the government and military of this great nation would never do anything like that, don't we, children?) Damien Broderick From scerir at libero.it Sun Jan 24 22:15:53 2010 From: scerir at libero.it (scerir) Date: Sun, 24 Jan 2010 23:15:53 +0100 (CET) Subject: [ExI] Broken Time Translation Symmetry and State Reduction Message-ID: <4248198.209161264371353872.JavaMail.defaultUser@defaultHost> This might interest some here (Serafino?): Broken Time Translation Symmetry as a model for Quantum State Reduction Authors: Jasper van Wezel Might interest, yes. But the field of quantum theory is a sort of Babel. Let us reread Fuchs and Peres (Physics Today, 2000): "Contrary to those desires, quantum theory does not describe physical reality. What it does is provide an algorithm for computing probabilities for the macroscopic events ("detector clicks") that are the consequences of our experimental interventions. This strict definition of the scope of quantum theory is the only interpretation ever needed, whether by experimenters or theorists." Now, if QT does *not* describe *reality*, or *physical* reality, it makes little sense to try to transform QT into an ontological theory. Mainly because its Procrustean formalism does not describe physical reality, the ontological reality, the standing-alone reality. I had the chance, in the '70s, of attending a conference in which one of the two (or maybe three) guys who created the quantum formalism made, more or less, the same point.
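As a bare illustration of what "an algorithm for computing probabilities" means in practice (a sketch only; the state vector below is an arbitrary example, not anything from the paper under discussion), the Born rule takes a state and a measurement basis and returns nothing but detector-click odds:

import numpy as np

psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # example qubit state (|0> + i|1>)/sqrt(2)
basis = {0: np.array([1.0, 0.0]),          # detector for |0>
         1: np.array([0.0, 1.0])}          # detector for |1>

for outcome, b in basis.items():
    p = abs(np.vdot(b, psi)) ** 2          # Born rule: p = |<b|psi>|^2
    print(f"P(detector {outcome} clicks) = {p:.3f}")
# Prints 0.500 and 0.500: probabilities for macroscopic events, with no
# further claim about a standing-alone reality -- the Fuchs-Peres point.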
From lacertilian at gmail.com Fri Jan 22 16:53:49 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 22 Jan 2010 08:53:49 -0800 Subject: [ExI] thought controled Third arm, (was EPOC EEG headset) In-Reply-To: <8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <816ABDBA8A5A4F3AB07B7525C63D4A95@spike> <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com> <8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> Message-ID: This is very exciting news! I had been wondering about that for years, and never thought to look it up specifically: are we hardwired to feel ourselves as two-armed, two-legged, one-headed creatures? Or do we only learn to do so because that is the body we find ourselves in? If the latter is true, it should be possible to learn entirely new body parts, which appears to be exactly what you were doing. Brain plasticity is the best thing that's ever evolved. Too bad it isn't so easy to unlearn your body parts, as you got an inkling of and amputees have to live with. If only the brain had a little less space, maybe it would be more willing to delete some files. Oh well. Keep up the good work Alex. From eschatoon at gmail.com Sun Jan 24 22:28:33 2010 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Sun, 24 Jan 2010 23:28:33 +0100 Subject: [ExI] thought controled Third arm, (was EPOC EEG headset) In-Reply-To: <8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <816ABDBA8A5A4F3AB07B7525C63D4A95@spike> <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com> <8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> Message-ID: <1fa8c3b91001241428md7093dl7e62e2beb261d4f6@mail.gmail.com> This is very interesting. Can you say more? 2010/1/22 : > The whole affair is very strange as it is first hand (LOL) experience > relating to many theoretical subjects discussed here. To me the question of > copy vs original seems slightly altered by this experience. -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From pharos at gmail.com Sun Jan 24 22:28:43 2010 From: pharos at gmail.com (BillK) Date: Sun, 24 Jan 2010 22:28:43 +0000 Subject: [ExI] Gmail has ads!!! In-Reply-To: <620588.10605.qm@web113618.mail.gq1.yahoo.com> References: <620588.10605.qm@web113618.mail.gq1.yahoo.com> Message-ID: On 1/24/10, Ben Zaiboc wrote: > Yes, browsing is an entirely different (and intensely irritating) experience > if you can't use Firefox, or some similar Adblock-capable browser. > > Works on Internet Explorer and uses the EasyList filters from AdBlock Plus. Bad news is that it (to-date) doesn't stop the gmail ads, but is OK for cleaning-up general browsing. BillK From spike66 at att.net Sun Jan 24 22:02:32 2010 From: spike66 at att.net (spike) Date: Sun, 24 Jan 2010 14:02:32 -0800 Subject: [ExI] what? what? In-Reply-To: <4B5CAD18.70206@satx.rr.com> References: <1580834.164521264283143313.JavaMail.defaultUser@defaultHost><589142.44940.qm@web65611.mail.ac4.yahoo.com> <4B305952E3444B348E69ADEB3F2A7FB6@spike> <4B5CAD18.70206@satx.rr.com> Message-ID: <3A4F202E7AE748F19875FE0171A17EEA@spike> > ...On Behalf Of Damien Broderick > ... > She? > > Sun Sign: Capricorn. Chinese Sign: Metal Rooster. Location: > Albuquerque, New Mexico> Good thanks, Damien. Regarding she? 
that was intentional, for I am hoping to make the female gender terms more general than the male as opposed to how it is now, the other way around. It stands to reason, since the term she contains he, and the term her also contains he, so he can be a specific subset, and the female gender terms become nonspecific. Wouldn't that solve a bunch of problems? Ladies here, would you have any objections to making all unknown persons she and her? spike From scerir at libero.it Sun Jan 24 22:47:28 2010 From: scerir at libero.it (scerir) Date: Sun, 24 Jan 2010 23:47:28 +0100 (CET) Subject: [ExI] heaves a long broken psi Message-ID: <18064304.196221264373248708.JavaMail.defaultUser@defaultHost> > [s.] Very close again. But there is another path, to be explored. [...] There are many more paths, of course. For example, another one would start from a very simple step. In GR physical objects are localized in space and time only with respect to each other. If we consistently move all the objects in spacetime at once, we are not generating a new general relativistic state, but only an equivalent mathematical description of the same state. According to Rovelli and the above concept of 'diffeomorphism invariance', spacetime and space-time dynamics are defined by the objects themselves. Now, if we apply the above simple concept to the 'strange' behaviour of the entangled particles, it follows that their behaviour looks 'strange' in the framework of the ordinary space-time made of massive bodies, but their behaviour would not be so 'strange' in the framework of their own space-time, only made of entangled particles. Night thought for Damien, Jeff, Stuart: imagine a universe made of two entangled particles. From gts_2000 at yahoo.com Sun Jan 24 22:50:24 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 24 Jan 2010 14:50:24 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) Message-ID: <754434.70615.qm@web36505.mail.mud.yahoo.com> --- On Sat, 1/23/10, Stathis Papaioannou wrote: >> Digital simulations of non-digital objects never equal >> the things they simulate, except that some people here like >> to imagine so. > > It is true that a digital simulation is not the same as the > original, but the question is whether it performs the same function > as the original. A simulated apple could taste, feel, smell like a > real apple to a person with a lot of extra equipment which I'm sure > computer game developers are working on Game developers create many kinds of illusions. Here we concern ourselves with reality. > A simulated brain will not be identical to a real brain but > you seem to agree that it could display the same behaviour as a real > brain if we added appropriate sensory organs and effectors. > However, you make the claim that although every other function of the > brain could be reproduced by the simulated brain the consciousness can > never be reproduced. Never by a s/h system, but such a system could perform all the visible functions of a conscious brain. We can in principle create weak AI, defined as software running on hardware that exhibits all the behaviors of a natural brain. By behavior I mean physical behavior of the apparatus to which it is attached including its outputs. We can, in other words, create unconscious robots with weak AI that pass the Turing test. > But if that were so, it would allow for the possibility > that you are a zombie and don't realise it, which you agree > is absurd.
> Therefore, you MUST agree that it is impossible to > reproduce all the functions of the brain without also reproducing > consciousness. But I don't. > It is still open to you to claim that a computer could never > reproduce human intelligence (and therefore never reproduce human > consciousness); We can create the outward appearance of human intelligence. -gts From spike66 at att.net Sun Jan 24 22:53:57 2010 From: spike66 at att.net (spike) Date: Sun, 24 Jan 2010 14:53:57 -0800 Subject: [ExI] heaves a long broken psi In-Reply-To: <4B5CBD79.1090309@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <9711229417674894B95766237C126A85@spike> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <580930c21001241301j59fd045bn9bd6454833b6867@mail.gmail.com> <4B5CBD79.1090309@satx.rr.com> Message-ID: > ...On Behalf Of Damien Broderick > ...If I can write award-winning fiction, why aren't I rich? Beats the hell out > of me, it's just so wrong... Damien Broderick You answered it yourself Damien: you write award-winning fiction, not money-winning fiction. L. Ron Hubbard wrote money-winning fiction. If I rubbed a lamp and a genie offered me the choice, to be L. Ron Hubbard or Damien Broderick, the choice would be easy: there aint never been the pile of money deep enough for me to write that kind of money-winning fiction. And this is ME talking, I loooove money, love that stuff, in jaw dropping quantities, the more the merrier assuming it belongs to me etc. Damien, do continue forever your award-winning writing and be poor like the rest of us, ja? spike From gts_2000 at yahoo.com Sun Jan 24 23:31:23 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 24 Jan 2010 15:31:23 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <20100124055227.5.qmail@syzygy.com> Message-ID: <481253.54115.qm@web36503.mail.mud.yahoo.com> --- On Sun, 1/24/10, Eric Messick wrote: >> Sure looks like religion to me! > Care to explain how? Did you read my post of yesterday in which I wrote a story about a botanist who wrote a book describing every possible fact about a certain apple? A programmer translated that book from the English language into the C++ language (or into whatever his programming language of choice, mine is C++) to create a perfect digital simulation of the apple. The moral of the story: the programmer translated a book that describes an apple into a digital simulation that also describes the apple, illustrating that simulations of non-digital objects amount to mere *descriptions* of objects -- just books about objects. *And books about objects do not equal the objects they're about.* To say then that a digital description of a person can actually eat and taste a description of an apple strikes me as an absurd sort of religious statement. > Is it: > Syntax can never produce semantics. Yes. > or: > Software can never be part of a mind. No. Our minds might run programs but they must also do something more to explain the facts, namely that stubborn fact of conscious understanding of symbols (semantics). > or: > Mind can never be simulated. We can simulate mind, but a simulation of it will not equal a real brain/mind. To simulate mind is to create weak AI. > or: > Consciousness is not a computational process. We can compute the brain.
But because the brain does not exist as a digital object, i.e., because it does not equal (merely) a digital computer running software, we cannot make digital copies of it. Only actual copies of it (e.g. physical clones) will have consciousness. We can make digital simulations but not digital copies of the brain, and simulations of non-digital objects including brains lose the real properties of the originals; they amount to mere descriptions of the originals. As descriptions, they lose their first-person ontology. -gts From gts_2000 at yahoo.com Mon Jan 25 00:38:28 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 24 Jan 2010 16:38:28 -0800 (PST) Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: <580930c21001241006q3d5ed6ddufc0d1ff0ba7128bb@mail.gmail.com> Message-ID: <297853.24175.qm@web36505.mail.mud.yahoo.com> --- On Sun, 1/24/10, Stefano Vaj wrote: > ... the fact, simply obscured by a few centuries of philosophical > dualism, that "consciousness" has no really different ontological > status, from, say, digestion... Right, Stefano, except that I think you really mean to say that consciousness has no really different epistemic status from digestion, not that it has no different ontological status. I think we can and should consider consciousness reducible epistemically to its neurological substrate, and in this respect we should consider it no different from digestion. When we come to know every possible fact about the brain and nervous system, we will then know everything we can or need know about consciousness. This epistemic reducibility does not however imply ontological reducibility. Unlike most things which have a reducible third-person ontology (e.g., mountains, planets, digestive systems) consciousness has an irreducible first-person ontology. It exists in the same one world as those things with third-person ontologies and differs from those things only in that one respect. The difference means nothing in particular, though it has caused an enormous amount of philosophical confusion over the centuries. -gts From ablainey at aol.com Mon Jan 25 01:15:43 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Sun, 24 Jan 2010 20:15:43 -0500 Subject: [ExI] psi in Nature In-Reply-To: <31811703.158651264286672082.JavaMail.defaultUser@defaultHost> References: <31811703.158651264286672082.JavaMail.defaultUser@defaultHost> Message-ID: <8CC6B730CE1E410-58B8-49C1@webmail-m078.sysops.aol.com> I'd like to think the brain is not an entirely chaotic object. However some people would make me question that! Joking aside. To some degree, yes the brain must be a chaotic system or where would random thoughts come from? At the meso scale of neurons, I can imagine that intelligence and problem solving could (probably must) be chaotically driven processes, where random firings generate possible solutions until something fits the question. Roughly comparable to trying random jigsaw pieces until you get a fit. However in the thought process that fit may not and often isn't perfect, but as long as it passes a reasonable threshold level, it is accepted. If I am right in thinking that the quantum signature of a chaotic system is not necessarily dependent on quantum entanglement (especially with a remote system, as I am suggesting for psi), then it may not be proof of atomic entanglement in the system and therefore neither proves nor disproves psi. Then again the Q.S. may well be an indicator one way or the other.
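One caveat on the brain-to-brain broadcasting idea, with a sketch to make it concrete (illustrative numpy only; the Pauli-X "signal" is an arbitrary choice of local operation): in standard quantum mechanics, no local operation on one member of an entangled pair changes the measurement statistics seen at the other member -- the no-communication theorem -- so entangled atoms by themselves cannot carry a message:

import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a density matrix
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell.conj())

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # attempted "signal" applied to particle A

def reduced_B(rho):
    # partial trace over particle A: everything B alone can measure
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

U = np.kron(X, I2)                        # act locally on A only
rho_after = U @ rho @ U.conj().T

print(np.round(reduced_B(rho), 3))        # [[0.5 0] [0 0.5]]
print(np.round(reduced_B(rho_after), 3))  # identical: B notices nothing

The correlations only show up when the two sets of results are later compared through an ordinary classical channel, which is why any entanglement story about psi needs some extra, non-standard physics.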
To be honest I don't know enough about quantum sig's and trying to understand chaotic systems really frustrates me... they are just too Random! -----Original Message----- From: scerir Sent: Sat, 23 Jan 2010 22:44 Alex: The level of psi ability would therefore be dependent on the quantity of entangled atoms in each individual's brain. # Is the brain a chaotic object? I mean, is a neural net something chaotic, at least partially? I really do not know anything about that. But. It is known that chaos - at a macroscopic level, at a mesoscopic level - has a quantum signature ('signature', not necessarily 'cause') and this signature is the quantum entanglement of the quantum systems 'immersed' in the chaotic regime. From femmechakra at yahoo.ca Mon Jan 25 01:29:23 2010 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sun, 24 Jan 2010 17:29:23 -0800 (PST) Subject: [ExI] Paracrap again and it's long:) (was heaves a long broken psi) In-Reply-To: Message-ID: <185434.6520.qm@web110407.mail.gq1.yahoo.com> --- On Sun, 1/24/10, BillK wrote: > The point about humans guessing at random is that the human brain doesn't do 'random'. The brain is always looking for patterns, even where none exist. I am curious about this as I believed that Psi is much related to patterns. When you say none exist, and I know this might sound crazy, could you give me an example? I know in our subconscious thoughts we do things every day such as scratch our noses and yet these thoughts don't stay integrated into our overall consciousness. We just don't think about them. We think of memories, tasks, feelings and/or ideas. When I mean memories I mean everything that our conscious has chosen to remember. Each individual is a sphere of memories. No other human being can see, hear, taste, touch or experience the consciousness of another. They may have similar thoughts and ideas but they can never experience it perfectly in the exact same fashion as the other. This is what makes us unique individuals. Much like scratching our noses, maybe the brain is constantly searching for patterns of which we are subconsciously not even aware. > In psi tests the brain is continuously making up stories, like the ball 'must' land on red next, or tails is expected now, or the next symbol must be a star. What if the brain is doing the calculations without awareness? What if extra-perception is merely a means of becoming aware of the patterns? The other day someone mentioned that they had an awareness that someone else might have been in an accident. How could this feeling possibly have travelled from one location to another? I really have no idea, maybe static? Who knows. Whatever the case may be, the memory she experienced was subconscious. Whether real or not, the thought popped into her mind and she was quick to recall it when the topic became available. Doesn't anybody find that amazing? Now I know the John Clarks;) will say, "but how do you know she didn't imagine it or she is lying?", well you don't. You can only imagine. It's much like doing math. You weigh the matter. One possibility is that she showed up on the Extropy Chat by chance and because she saw what was being discussed she openly gave her memory of thought on that subject. Otherwise she is either out for attention or lying. Highly unlikely to me as she mentioned that she was in the Science Realm and who wants to be called "a kook":).
She went out of her way to mention something that was exceptional to her even with fear of ridicule. When you get to know people very well you take into account many factors. I think people have extra-perception in many areas. > If you give the brain a random list, it is most unlikely that the human will guess correctly at the expected chance (random) level. Either because the test was too short. Because the expected chance level is only achieved over long-duration tests to remove random fluctuations. Or the human was making up patterns of guesses (it has to - that's the way it works) and the patterns don't match a randomized list. They will be better or worse. Exactly my point. She was either random and crazy or she was telling the truth. That's why it's so hard to do Psi testing: it takes those slight moments of subconscious thoughts to weigh the matter as opposed to random testing. Anyway back to the brain calculations without awareness. What if the patterns are already embedded? Could genes hold these patterns? If genes can recall our eye color or birth defects why wouldn't they pass on symbols or metrics? So many thoughts and questions from such a small paragraph... thanks Bill... you gave me some thoughts to ponder. Apparently I have too much time on my hands:) From gts_2000 at yahoo.com Mon Jan 25 02:17:27 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 24 Jan 2010 18:17:27 -0800 (PST) Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: Message-ID: <103687.50323.qm@web36502.mail.mud.yahoo.com> --- On Sat, 1/23/10, Stathis Papaioannou wrote: >> My assertion leads simply to a philosophy of mind in >> which the brain attaches meanings to symbols in some way >> that we do not yet fully understand. Nothing more. > > But this is unnecessary, at best. You could say we do > understand how meaning is attached to symbols when they are finally > attached to an environmental input. Only if you really *want* the brain > to remain mysterious would you add the superfluous magical layer. I have no desire for the brain to remain mysterious but we should admit the fact of our own ignorance: nobody in 2010 understands how the brain becomes conscious, much less how it has conscious experiences of understanding the meanings of symbols. 30 years ago computers seemed so groovy that some idealists hoped they would explain the Mind along with Life, The Universe and Everything. I believe now that they were wrong. First came the abacus, then came the digital computer. Technologically, the computer seems completely different from the abacus. But philosophically speaking they do not differ. >> Looks to me like the world is comprised of just one >> kind of stuff. Some configurations of that one stuff have >> conscious understanding of symbols. Most if not all other >> configurations of that stuff do not. > > Yes, but the claim that it is impossible for matter other > than that in brains to produce consciousness is irrational. You should know by now that I don't make that claim. I claim that matter configured in the form of digital computers won't produce consciousness, and that natural selection configured organic brains in a manner different from how we have configured digital computers.
-gts From steinberg.will at gmail.com Mon Jan 25 02:48:29 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 24 Jan 2010 21:48:29 -0500 Subject: [ExI] psi in Nature In-Reply-To: <8CC6B730CE1E410-58B8-49C1@webmail-m078.sysops.aol.com> References: <31811703.158651264286672082.JavaMail.defaultUser@defaultHost> <8CC6B730CE1E410-58B8-49C1@webmail-m078.sysops.aol.com> Message-ID: <4e3a29501001241848m49e4d41eu1888257da35347ac@mail.gmail.com> > From this it isn't much of a jump to imagine that atoms in the brain of one > individual are state locked with atoms in another. With this it is not beyond > the realms of possibility that the brain can influence the state of the > atoms to broadcast information. > All entangled atoms would change state accordingly thus passing the > information to their host brain. > > The level of psi ability would therefore be dependent on the quantity of > entangled atoms in each individual's brain. The brain can't broadcast information through entanglement because manipulation would ruin the entanglement. But perhaps, in decision mechanisms rooted in chance, this entanglement acts as a cause for a chaotic system leading to a guess, so that two people seem to be communicating. Having larger numbers of entangled particles between the two increases the amount of information and thus saves more from chaotic, untraceable decay. By finding slight similarities to Lorenz attractors, the brain can deduce ideas of the root of chaotic systems, again reducing decay. It is feasible to imagine the brain reconstructing even specific images and ideas given the amount of decay exponentially reduced upon introduction of new pairs, describing in one fell swoop telepathy, empathy, precognition (sort of), remote viewing (also sort of), and synchronicity. Fluctuations in sociological information cause interesting and coincidental effects physically and chronologically far from each other, and the same goes for biology, cosmology, etc. Extension to an "entanglement net" that could certainly exist, at least theoretically, does not seem ludicrous. Even if psi turns out not to be real, faster-than-light communication will be needed at some point anyway, so why not theorize? From emlynoregan at gmail.com Mon Jan 25 04:44:48 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 25 Jan 2010 15:14:48 +1030 Subject: [ExI] Gmail has ads!!! In-Reply-To: References: <620588.10605.qm@web113618.mail.gq1.yahoo.com> Message-ID: <710b78fc1001242044h1640634fpc7af972ead7cc297@mail.gmail.com> 2010/1/25 BillK : > On 1/24/10, Ben Zaiboc wrote: >> Yes, browsing is an entirely different (and intensely irritating) experience >> if you can't use Firefox, or some similar Adblock-capable browser. >> >> > > > > Works on Internet Explorer and uses the EasyList filters from AdBlock Plus. > > Bad news is that it (to-date) doesn't stop the gmail ads, but is OK > for cleaning-up general browsing. > > BillK I use AdBlock Plus, so don't usually see ads, but sometimes I use other browsers, or have ABP turned off for other reasons. Interestingly (to me), I occasionally see items of interest in the gmail ads, and click them. That's pretty abnormal for me; usually I'd never click internet ads, especially crap like banner ads and such.
-- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From stathisp at gmail.com Mon Jan 25 05:24:23 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 25 Jan 2010 16:24:23 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <754434.70615.qm@web36505.mail.mud.yahoo.com> References: <754434.70615.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/25 Gordon Swobe : > --- On Sat, 1/23/10, Stathis Papaioannou wrote: > >>> Digital simulations of non-digital objects never equal >>> the things they simulate, except that some people here like >>> to imagine so. >> >> It is true that a digital simulation is not the same as the >> original, but the question is whether it performs the same function >> as the original. A simulated apple could taste, feel, smell like a >> real apple to a person with a lot of extra equipment which I'm sure >> computer game developers are working on > > Game developers create many kinds of illusions. Here we concern ourselves with reality. We agree that we are not reproducing the actual apple or brain, but an engineered simulacrum. The question is, will this simulacrum have the properties of the original? In cataract surgery the lens in the eye is removed and replaced with a synthetic lens, which is definitely not the same thing as the biological original, but is just as good functionally (indeed, better since the patient had a cataract). >> A simulated brain will not be identical to a real brain but >> you seem to agree that it could display the same behaviour as a real >> brain if we added appropriate sensory organs and effectors. >> However, you make the claim that although every other function of the >> brain could be reproduced by the simulated brain the consciousness can >> never be reproduced. > > Never by a s/h system, but such a system could perform all the visible functions of a conscious brain. We can in principle create weak AI, defined as software running on hardware that exhibits all the behaviors of a natural brain. By behavior I mean physical behavior of the apparatus to which it is attached including its outputs. We can, in other words, create unconscious robots with weak AI that pass the Turing test. This is your belief but you don't provide a valid supporting reason. The symbol grounding problem according to the very article you cited is not a problem for a s/h system. What you claim is that whether the symbols are grounded or not they won't have "meaning", but you don't explain why symbol grounding is not sufficient for "meaning". You can't explain "meaning" at all other than as a mysterious thing that you have and computers which appear to have it don't. >> But if that were so, it would allow for the possibility >> that you are a zombie and don't realise it, which you agree >> is absurd. >> Therefore, you MUST agree that it is impossible to >> reproduce all the functions of the brain without also reproducing >> consciousness. > > But I don't. > >> It is still open to you to claim that a computer could never >> reproduce human intelligence (and therefore never reproduce human >> consciousness); > > We can create the outward appearance of human intelligence. Therefore, we can selectively remove any aspect of a human's consciousness without them or anyone else realising it. You've said you don't like the implications of this statement but you haven't shown how it is false.
-- Stathis Papaioannou From stathisp at gmail.com Mon Jan 25 06:52:58 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 25 Jan 2010 17:52:58 +1100 Subject: [ExI] digital simulations, descriptions and copies In-Reply-To: <103687.50323.qm@web36502.mail.mud.yahoo.com> References: <103687.50323.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/25 Gordon Swobe : >> Yes, but the claim that it is impossible for matter other >> than that in brains to produce consciousness is irrational. > > You should know by now that I don't make that claim. I claim that matter configured in the form of digital computers won't produce consciousness, and that natural selection configured organic brains in a manner different from how we have configured digital computers. OK, I should have said that you claim it is impossible for matter configured in the form of a digital computer to produce consciousness, even though you have no reason to say this. Your assertion that syntax cannot produce meaning, which (though wrong) does purport to be a logical proof that programs can't think, does not prevent the matter implementing the programs from thinking. After all, the neurons in the brain do engage in signal processing, something like artificial neural nets, and you claim that it is not this which leads to consciousness, but the intrinsic activity of the neuron. So even by your own lights, it should be possible to say that the computer behaving intelligently is conscious not because of the software it is running, but because of the electrical activity in its circuits. That you won't even consider this possibility is due to a prejudice you have, and not due to any argument (even one that turns out to be wrong) that you have presented. -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 25 10:14:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 25 Jan 2010 21:14:27 +1100 Subject: [ExI] Brains, Computers, AI and Uploading In-Reply-To: <297853.24175.qm@web36505.mail.mud.yahoo.com> References: <580930c21001241006q3d5ed6ddufc0d1ff0ba7128bb@mail.gmail.com> <297853.24175.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/25 Gordon Swobe : > I think we can and should consider consciousness reducible epistemically to its neurological substrate, and in this respect we should consider it no different from digestion. When we come to know every possible fact about the brain and nervous system, we will then know everything we can or need know about consciousness. This epistemic reducibility does not however imply ontological reducibility. Well, it might. If we know enough about digestion we might be able to digest food by artificial means. We haven't made a gut, but we've carried out digestion. Ultimately, we might make an artificial gut with programmed nanomachinery that can be installed in patients with inflammatory bowel disease, for example. Again, it would not be an actual biological gut, but as long as it worked properly no-one would say that it didn't really digest food but only pretended to do so. -- Stathis Papaioannou From gts_2000 at yahoo.com Mon Jan 25 13:11:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 25 Jan 2010 05:11:34 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <559425.24455.qm@web36506.mail.mud.yahoo.com> --- On Mon, 1/25/10, Stathis Papaioannou wrote: > This is your belief but you don't provide a valid supporting > reason.
The symbol grounding problem according to the very article > you cited is not a problem for a s/h system. It certainly is a problem once you understand the syntax-semantics problem. You just don't take it seriously or don't understand it. Do you believe your desktop or laptop computer has conscious understanding of the words you type? -gts From stefano.vaj at gmail.com Mon Jan 25 13:56:27 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 25 Jan 2010 14:56:27 +0100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <559425.24455.qm@web36506.mail.mud.yahoo.com> References: <559425.24455.qm@web36506.mail.mud.yahoo.com> Message-ID: <580930c21001250556o41321d1br946a63362cb6405c@mail.gmail.com> 2010/1/25 Gordon Swobe : > Do you believe your desktop or laptop computer has conscious understanding of the words you type? What about unconscious understanding, since conscious does not seem to mean anything specific in the context of this discussion? -- Stefano Vaj From stathisp at gmail.com Mon Jan 25 15:19:59 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 26 Jan 2010 02:19:59 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <559425.24455.qm@web36506.mail.mud.yahoo.com> References: <559425.24455.qm@web36506.mail.mud.yahoo.com> Message-ID: On 26 January 2010 00:11, Gordon Swobe wrote: > It certainly is a problem once you understand the syntax-semantics problem. You just don't take it seriously or don't understand it. You are saying that in addition to the symbol grounding problem there is the problem of attaching "meaning" to the symbols. You can't explain what this meaning is but you feel that humans have it and computers don't. No empirical test can ever convince you that computers have it, because by definition there is no empirical test for it. Apparently no analytic argument can convince you either. > Do you believe your desktop or laptop computer has conscious understanding of the words you type? No. I don't believe animals understand everything humans do either, even though mammalian brains are structurally all very similar. But if the computer was able to have a convincing conversation with me (not a trick like ELIZA) then I would have to consider that it may understand the words. Furthermore, if the computer was based on reverse engineering a human brain then I would say it has to have the same consciousness as a human. As I have explained several times, I am led to the latter conclusion from the absurdity that results from assuming it false, rather in the way you can prove that sqrt(2) is irrational by assuming that it is rational and showing that the assumption leads to a contradiction. It's frustrating that you probably understand this but choose simply to dismiss it. -- Stathis Papaioannou From jonkc at bellsouth.net Mon Jan 25 15:29:03 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 25 Jan 2010 10:29:03 -0500 Subject: [ExI] heaves a long broken psi.
In-Reply-To: <4B5CA6F4.5040506@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> Message-ID: On Jan 24, 2010, Damien Broderick wrote: > Do you really suppose that parapsychologists with PhDs and professorships in statistics or physics have never noticed the elementary facts you're pointing at? Yep I do. Far better scientists than them (nobody would go into parapsychology if they could make it in any other field) have been fooled by statistics and found things that weren't there, and this was in looking at GOOD data sets. > The whole point of using statistics in science is that with sufficient data points, random correlations can be disambiguated from the non-random. I can't emphasize enough that it is pointless to use sophisticated mathematics to analyze data to find patterns if you have absolutely no reason to think that the data you're looking at is any good. > I've had a lot of this [anecdotal stories] directly from the scientists and military personnel in the now disbanded STAR GATE program, and it's hair-raising stuff. Physicist Richard Feynman had an interesting psi story. One day when he was a teenager in college and away from home for the first time he suddenly got an overwhelming feeling that his grandmother had died. The feeling was so strong that he made a telephone call home, and long distance calls were rare and expensive then. It turns out that grandma was just fine. Of course Feynman was an odd man, most people don't like to tell those type of psi stories, they prefer the other kind. > "So why aren't they rich? Why was STAR GATE shut down?" Because psi is unreliable and skittish. We are supposed to believe that psi is too unreliable and skittish for the scientific method to detect, and this grotesque situation has been going on for century after century; but we are also supposed to believe that psi is not too unreliable and skittish for third rate mystics and first rate charlatans to discover it. And if you believe that then there's this bridge I'd like to sell you. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Mon Jan 25 15:57:34 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 25 Jan 2010 07:57:34 -0800 (PST) Subject: [ExI] what? what? In-Reply-To: Message-ID: <633339.57141.qm@web113601.mail.gq1.yahoo.com> "spike" asked: > Ladies here, would you have any objections to making > all unknown persons she and her? On the other hand: Males here, would you object to being referred to as 'she' when somebody doesn't know (or doesn't care about) your gender? > I am hoping to make the female gender terms more general than > the male as opposed to how it is now, the other way around. Why? Just to overturn an established convention that works perfectly well? In the absence of a proper, gender-neutral term, I fail to see why we should make such an effort to change something that works quite well already simply to satisfy what seems to me some vague (and mistaken, imo) notion of political correctness. What does it gain? What does it cost? > It stands to reason, since the term she contains he... 
You could equally argue that 'Man' is the root, as in 'mankind', and that it's therefore more appropriate for the default to be 'he'. I think that's a better argument. Rather than focus on the divisive aspects of the language, why aren't we focusing on the commonalities? There's nothing wrong with a woman being a spokesman or a salesman; they are, after all, members of the race of mankind. It just seems contrived and unnecessary to me. And it makes me wonder where it might lead. Will we be encouraged to refer to womanagers? Will officers on ships have to woman their posts? Should we call Bruce Willis an actress? Ben Zaiboc, Tongue only partly in cheek From jonkc at bellsouth.net Mon Jan 25 16:03:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 25 Jan 2010 11:03:33 -0500 Subject: [ExI] Frozen to minus 100F and then walks away In-Reply-To: <580930c21001250556o41321d1br946a63362cb6405c@mail.gmail.com> References: <559425.24455.qm@web36506.mail.mud.yahoo.com> <580930c21001250556o41321d1br946a63362cb6405c@mail.gmail.com> Message-ID: <33C783D3-41F1-441B-8B01-44783A3A6BE8@bellsouth.net> The Alaskan Upis beetle freezes at 18 degrees F, but if you continue to cool it all the way down to minus 100 degrees F (-73C) and then warm it up again, the beetle wakes up and walks away apparently unharmed. The insect accomplishes this miracle not with an antifreeze protein, as Arctic fish do, but with a sugar-fatty acid complex called "xylomannan". http://www.nytimes.com/2010/01/19/science/19creatures.html John K Clark From thespike at satx.rr.com Mon Jan 25 16:38:01 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 25 Jan 2010 10:38:01 -0600 Subject: [ExI] heaves a long broken psi. In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> Message-ID: <4B5DC8E9.3060609@satx.rr.com> On 1/25/2010 9:29 AM, John Clark wrote: > We are supposed to believe that psi is too unreliable and skittish for > the scientific method to detect, No we're not. > but we are also supposed to believe > that psi is not too unreliable and skittish for third rate mystics and > first rate charlatans to discover it. Obviously it's not too unreliable to be discovered (and mimicked by charlatans who use trickery instead), because it's been discovered. It is clearly still too unreliable to form the primary basis of military or industrial decisions, although it is reliable enough to have been a valuable ancillary input into intelligence work for decades in both the USA and the Soviet Union. Do some research. Damien Broderick From bbenzai at yahoo.com Mon Jan 25 16:12:00 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 25 Jan 2010 08:12:00 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <13377.22829.qm@web113613.mail.gq1.yahoo.com> http://xkcd.com/386/ From stefano.vaj at gmail.com Mon Jan 25 16:59:32 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 25 Jan 2010 17:59:32 +0100 Subject: [ExI] what? what?
In-Reply-To: <633339.57141.qm@web113601.mail.gq1.yahoo.com> References: <633339.57141.qm@web113601.mail.gq1.yahoo.com> Message-ID: <580930c21001250859k4d55742ao6329a97374f0655a@mail.gmail.com> 2010/1/25 Ben Zaiboc : > Why? Just to overturn an established convention that works perfectly well? In the absence of a proper, gender-neutral term, I fail to see why we should make such an effort to change something that works quite well already simply to satisfy what seems to me some vague (and mistaken, imo) notion of political correctness. What does it gain? What does it cost? I got used, with some pain, to the "his or her", "him or her" politically correct usage of current-day English when referring to an unspecified subject. The trouble is also increased by the fact that "one, one's" quickly becomes awkward in long sentences. In Italian I also suspect that if we had such words "his or her" would not be too polite, since ladies should come first... :-) -- Stefano Vaj From max at maxmore.com Mon Jan 25 16:42:37 2010 From: max at maxmore.com (Max More) Date: Mon, 25 Jan 2010 10:42:37 -0600 Subject: [ExI] what? what? Message-ID: <201001251709.o0PH9SNS020249@andromeda.ziaspace.com> >Should we call Bruce Willis an actress? I think not -- Willis is completely lacking in tresses. Max From ablainey at aol.com Mon Jan 25 17:45:46 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Mon, 25 Jan 2010 12:45:46 -0500 Subject: [ExI] thought controled Third arm, (was EPOC EEG headset) In-Reply-To: References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><816ABDBA8A5A4F3AB07B7525C63D4A95@spike> <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com><8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> Message-ID: <8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> Likewise, I have always wondered about it. I have done the thought experiments, considering what I would feel like if I lost an arm or a leg, or was only a head in a jar, etc. I also tried to imagine what it would be like if I replaced a body part with a donor part or something artificial. However, I never considered that gaining an artificial part could cause me to accept it as part of the body. I attribute this purely to the fact that I was in direct mental control of the arm. It isn't the same as operating a JCB or other equipment. Somehow operating machinery by hand abstracts it and stops you seeing it as part of the body. Learning to reliably control it took only on the order of a few hours (for 3 axes) and it started to become subconscious very quickly. To begin with I must have been in a similar position to a baby discovering its limbs. Now I firmly believe that brain plasticity will allow all kinds of artificial limb attachments and body modifications. It does make me wonder whether the brain would develop in some way to accommodate them, or just overwrite existing regions. More than likely the latter. So it could be a big trade off depending what you loose. -----Original Message----- From: Spencer Campbell To: ExI chat list Sent: Fri, 22 Jan 2010 16:53 Subject: Re: [ExI] thought controled Third arm, (was EPOC EEG headset) This is very exciting news! I had been wondering about that for years, and never thought to look it up specifically: are we hardwired to feel ourselves as two-armed, two-legged, one-headed creatures? Or do we only learn to do so because that is the body we find ourselves in?
If the latter is true, it should be possible to learn entirely new body parts, which appears to be exactly what you were doing. Brain plasticity is the best thing that's ever evolved. Too bad it isn't so easy to unlearn your body parts, as you got an inkling of and amputees have to live with. If only the brain had a little less space, maybe it would be more willing to delete some files. Oh well. Keep up the good work Alex. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 25 18:43:38 2010 From: spike66 at att.net (spike) Date: Mon, 25 Jan 2010 10:43:38 -0800 Subject: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) In-Reply-To: <8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><816ABDBA8A5A4F3AB07B7525C63D4A95@spike><8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com><8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> <8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> Message-ID: <44FA78AEB074487E92B88EC3E311F9A6@spike> A comment here caused me to ask a completely unrelated question: in the age of software spell checkers and grammar checkers, what is the most common spelling error? It would need to be an error that spelled a different word, and the new word would need to make a grammatically correct sentence. Consider the example: >...So it could be a big trade off depending what you loose. I think this might be the most common modern spelling error, since the word loose, ordinarily an adjective, can also be a verb. So the above sentence would be grammatically correct. If the typo makes a new word and the new sentence makes grammatical sense, it reminds me of the hopeful mutant theory of evolution, where a genetic mistake creates a new characteristic, which in rare cases does not slay but is actually neutral or beneficial to the beast in which it occurs, and results in the change being propagated, perhaps eventually resulting in the creation of a new species. Another aside: a number of years ago, a friend received an anonymous threatening letter from a Co$er in which the word "than" was spelled "then" as in "I am bigger then you." It was a handwritten note, and the same error occurred 4 times. Later, an email was received in which the same error appeared twice. I suspected the same guy wrote both, but since this is such a common error, it wasn't actual proof, but ample grounds for suspicion. So what is the most common modern spelling error? spike From spike66 at att.net Mon Jan 25 18:22:12 2010 From: spike66 at att.net (spike) Date: Mon, 25 Jan 2010 10:22:12 -0800 Subject: [ExI] what? what? In-Reply-To: <580930c21001250859k4d55742ao6329a97374f0655a@mail.gmail.com> References: <633339.57141.qm@web113601.mail.gq1.yahoo.com> <580930c21001250859k4d55742ao6329a97374f0655a@mail.gmail.com> Message-ID: <05EA165D11E54E738F45406277DFB068@spike> > ...On Behalf Of Stefano Vaj > Subject: Re: [ExI] what? what? > > 2010/1/25 Ben Zaiboc : > > Why? ?Just to overturn an established convention that works > perfectly well? ... Stefano Vaj No. It would be so that we will cheerfully return to using exclusively him and he, or exclusively her and she, after being educated on why all the alternatives are probably worse. 
Using he or she, him or her, breaks up the cadence of a thought and messes up literary flow. Do let us agree to dispense with the him or her, use gender-specific terms (either gender, interchangeably) in an inclusive manner, and stop worrying about it being sexist. It really isn't! We have both genders in every profession now, including prostitution: http://www.foxnews.com/story/0,2933,583680,00.html So we get it now. It really isn't about being PC. I don't even like PC. Recall I am the guy or gal who recently suggested modifying John Lennon's Imagine to be about creating mind-reading sexbots and making a ton of money. You cats didn't even laugh at my silliness, dammit. {8-[ I don't know why; I cracked me up with that gag. {8^D Perhaps the late rocker would mildly disapprove? Let us no longer harm our language (or yours Stefano) with the him or her. Let us recognize that "him" and "her" each mean either him or her, and let it go at that. We hetero boys just need to be cool if we are referred to by the feminine pronoun. The ladies have had to put up with it since forever. We can too. spike From eschatoon at gmail.com Mon Jan 25 18:58:10 2010 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Mon, 25 Jan 2010 19:58:10 +0100 Subject: [ExI] what? what? In-Reply-To: <05EA165D11E54E738F45406277DFB068@spike> References: <633339.57141.qm@web113601.mail.gq1.yahoo.com> <580930c21001250859k4d55742ao6329a97374f0655a@mail.gmail.com> <05EA165D11E54E738F45406277DFB068@spike> Message-ID: <1fa8c3b91001251058o80c57ewaea47d8c6f1e45e1@mail.gmail.com> I often use (s)he; too bad a similar trick cannot be used for "his" and "her". Ve, vis, ver (eg in Egan's novels) are good inventions but not really catching on. There is a trend to use the plural their (eg everyone should love their children). I can write PCese, but I think I will still refer to football players as he and to belly dancers as she: it is statistically true, too bad for PCness. G. On Mon, Jan 25, 2010 at 7:22 PM, spike wrote: > > >> ...On Behalf Of Stefano Vaj >> Subject: Re: [ExI] what? what? >> >> 2010/1/25 Ben Zaiboc : >> > Why? Just to overturn an established convention that works >> perfectly well? ... Stefano Vaj > > No. It would be so that we will cheerfully return to using exclusively him > and he, or exclusively her and she, after being educated on why all the > alternatives are probably worse. Using he or she, him or her, breaks up the > cadence of a thought and messes up literary flow. Do let us agree to > dispense with the him or her, use gender-specific terms (either gender, > interchangeably) in an inclusive manner, and stop worrying about it being > sexist. It really isn't! We have both genders in every profession now, > including prostitution: > > http://www.foxnews.com/story/0,2933,583680,00.html > > So we get it now. It really isn't about being PC. I don't even like PC. > Recall I am the guy or gal who recently suggested modifying John Lennon's > Imagine to be about creating mind-reading sexbots and making a ton of money. > You cats didn't even laugh at my silliness, dammit. {8-[ I don't know why; > I cracked me up with that gag. {8^D > > Perhaps the late rocker would mildly disapprove? > > Let us no longer harm our language (or yours Stefano) with the him or her. > Let us recognize that "him" and "her" each mean either him or her, and let it go > at that. We hetero boys just need to be cool if we are referred to by the > feminine pronoun. The ladies have had to put up with it since forever.
> We can too. > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From kanzure at gmail.com Mon Jan 25 19:30:15 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Mon, 25 Jan 2010 13:30:15 -0600 Subject: [ExI] Outlaw Biology? Public participation in the age of biology (UCLA; Jan 29/30) Message-ID: <55ad6af71001251130o300a3126scaa52ea455f069f@mail.gmail.com> The UCLA Center for Society and Genetics and Art/Sci present: Outlaw Biology? Public Participation in the Age of Big Bio 29-30 Jan. 2010 at the California NanoSystems Institute (map) Friday 4-8pm: Symposium Saturday 10am-3pm: Workshop and Exhibition more info: http://outlawbiology.net/bio-faire-exhibition/ """ Tenatative List of Workshops and Exhibitions 1. Bioweathermap, Jason Bobe. With field-trips to the UCLA Arboretum and Hammer Museum (in cooperation with Machine Project 2. Learn to Design a DNA-based nanostructure using cadnano software, Philip Lukeman 3. Paint colorful microbes ? luminescent, fluorescent, and pigmented ? on do-it-yourself solid media. With a little time and luck, we?ll preserve the painted results in epoxy, like microbiological paintings in amber, Mackenzie Cowell 4. SKDB: Learn to use software tools for open source manufacturing and bioengineering, Bryan Bishop and Ben Lipkowitz 5. Use of Acinetobacter calcoaceticus strain ADP1 as a DIY bioengineering platform, David Metzgar 6. Ars Synthetica: Have an informed, ethical, and open dialogue on the emerging field of synthetic biology, Gaymon Bennett 7. Extract DNA from Strawberries, CSG Staff 8. Lactobacillus Plasmid Recovery and Visualization for fun and profit, Meredith L. Patterson 9. DIY Webcam Microscopy. Join us for a worldwide webcam hacking event and make your own 100x USB microscope for less than $10. We?ll provide the webcams and a live internet feed from other workshop locations across the world, from Bangalore to Australia. Find out more at diybio.org/ucam 10. Velolab, See the first Bicyclized Mobile Biology lab, Sam Starr 11. Learn about FBI Outreach: Promoting Responsible Research & Career Opportunities, Special Agent Edward You 12. Learn about LavaAmp: The Personal Thermal Cycler, Guido N??ez-Mujica and Joseph P. Jackson III 13. The HOX Gene Zodiac project. Learn about homeobox genes, body plans and the Chinese Zodiac, Victoria Vesna """ (#4 is a lie- it will just be Ben presenting!) - Bryan http://heybryan.org/ 1 512 203 0507 From pharos at gmail.com Mon Jan 25 19:58:36 2010 From: pharos at gmail.com (BillK) Date: Mon, 25 Jan 2010 19:58:36 +0000 Subject: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) In-Reply-To: <44FA78AEB074487E92B88EC3E311F9A6@spike> References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <816ABDBA8A5A4F3AB07B7525C63D4A95@spike> <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com> <8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> <8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> <44FA78AEB074487E92B88EC3E311F9A6@spike> Message-ID: On 1/25/10, spike wrote: > A comment here caused me to ask a completely unrelated question: in the age > of software spell checkers and grammar checkers, what is the most common > spelling error? 
> > It would need to be an error that spelled a different word, and the new word > would need to make a grammatically correct sentence. Consider the example: > > >...So it could be a big trade off depending what you loose. > > I think this might be the most common modern spelling error, since the word > loose, ordinarily an adjective, can also be a verb. So the above sentence > would be grammatically correct. > > > So what is the most common modern spelling error? > > Surely it must be affect / effect? At least on the Exi list. ;) BillK From jonkc at bellsouth.net Mon Jan 25 20:12:35 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 25 Jan 2010 15:12:35 -0500 Subject: [ExI] heaves a long broken psi. In-Reply-To: <4B5DC8E9.3060609@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> Message-ID: On Jan 25, 2010, Damien Broderick wrote: > it is reliable enough to have been a valuable ancillary input into intelligence work for decades in both the USA and the Soviet Union. Valuable? I think not. True both governments have tried to use psi but governments have been known for doing lots of damn fool things. There has even recently been a nonfiction book and movie based on that fact, a comedy called "Men who stare at goats". The government thought that if you stared at a goat in just the right way you could kill it, or a person. They used your tax dollars to teach people how to do this. It didn't work. > Do some research. Translation: Go to some obscure website that you just found on Google, or read a pamphlet published by some outfit nobody ever heard of claiming revolutionary experimental results. No thank you. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 25 20:39:18 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 25 Jan 2010 14:39:18 -0600 Subject: [ExI] goats and gullibility In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> Message-ID: <4B5E0176.4050507@satx.rr.com> On 1/25/2010 2:12 PM, John Clark wrote: > There has even recently been a nonfiction book and movie based on that > fact, a comedy called "Men who stare at goats". There has been a highly fictionalized entertainment in book form by Jon Ronson (a very amusing writer), and an almost entirely fictionalized movie of that title. > The government thought > that if you stared at a goat in just the right way you could kill it, or > a person. This is untrue. > They used your tax dollars to teach people how to do this. This is untrue. > It didn't work. Since it didn't happen, it's not surprising that it didn't work. When you get your "information" about parapsychology from such sources, it's hardly surprising that you think it's all BULLSHIT. What's actually BULLSHIT is your disgraceful research methodology. 
A military remote viewer informed me when I asked him about this: > Jon Ronson's book is > mostly disinformation. He makes many claims within his book, based on > personal interviews he never had, which are totally bogus. I'm totally > familiar with all of the "goat" work at Fort Bragg and it has nothing to > do with any psychics. They are used to teach Special Forces medics how > to treat gunshot and other wounds in the field, period. Damien Broderick From jonkc at bellsouth.net Mon Jan 25 21:04:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 25 Jan 2010 16:04:33 -0500 Subject: Re: [ExI] goats and gullibility. In-Reply-To: <4B5E0176.4050507@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> Message-ID: Ok, I didn't read the goat book or even see the movie, so I won't defend it to my dying breath. However you inform me that a remote viewer thinks the book is Bullshit, and that makes me suspect that it may not be. The government certainly spent tax dollars on remote viewing, and that's just as silly as the goat stuff. Damien, did you really think quotations from a self-confessed remote viewer would aid in convincing me about anything? John K Clark From jonkc at bellsouth.net Mon Jan 25 20:42:45 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 25 Jan 2010 15:42:45 -0500 Subject: Re: [ExI] The digital nature of brains. In-Reply-To: <559425.24455.qm@web36506.mail.mud.yahoo.com> References: <559425.24455.qm@web36506.mail.mud.yahoo.com> Message-ID: On Jan 25, 2010, Gordon Swobe wrote: > It certainly is a problem once you understand the syntax-semantics problem. You just don't take it seriously or don't understand it. So if I study really, really hard, someday I too could reach enlightenment and make statements almost as brilliant and insightful as your pronouncements, such as your famous "even humans can't get semantics from syntax". John K Clark From thespike at satx.rr.com Mon Jan 25 21:23:45 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 25 Jan 2010 15:23:45 -0600 Subject: Re: [ExI] goats and gullibility. In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> Message-ID: <4B5E0BE1.8030403@satx.rr.com> On 1/25/2010 3:04 PM, John Clark wrote: > Damien, did you really think quotations from a self-confessed remote > viewer would aid in convincing me about anything? Of course not. Evidence from someone actually involved? Sorry I couldn't find something from your news source, the National Enquirer.
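The statistical point at issue in this thread can be made concrete with a minimal simulation (illustrative only, not drawn from any actual experiment or from the STAR GATE record): with honestly collected data, a weak real bias separates from pure chance as trials accumulate. The 52% hit rate and the z_score helper below are assumptions of the sketch.

import random

def z_score(hits, n, p=0.5):
    # Normal-approximation z-score for `hits` successes in n binary
    # trials against a chance hit rate of p.
    return (hits - n * p) / (n * p * (1 - p)) ** 0.5

random.seed(1)
for n in (100, 10000, 1000000):
    fair = sum(random.random() < 0.50 for _ in range(n))    # pure chance
    biased = sum(random.random() < 0.52 for _ in range(n))  # weak real effect
    print(n, round(z_score(fair, n), 1), round(z_score(biased, n), 1))

The fair source hovers near z = 0 at every sample size, while the 52% source drifts past z = 3 and keeps climbing as n grows. None of this helps, of course, if the data collection itself is corrupt, which is the other half of the dispute.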
From pharos at gmail.com Mon Jan 25 22:10:10 2010 From: pharos at gmail.com (BillK) Date: Mon, 25 Jan 2010 22:10:10 +0000 Subject: [ExI] goats and gullibility In-Reply-To: <4B5E0176.4050507@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> Message-ID: On 1/25/10, Damien Broderick wrote: > A military remote viewer informed me when I asked him about this: > > Jon Ronson's book is > > mostly disinformation. He makes many claims within his book based on > > personal interviews he never had which are totally bogus. I'm totally > > familiar with all of the "goat" work at Fort Bragg and it has nothing to > > do with any psychics. They are used to teach Special Forces medics how > > to treat gunshot and other wounds in the field, period. > > Guy Savelli was the Stargate psychic that Ronson said told him about the goats. He is still around and will sell you training on how to kill people with your mind. There are other versions of his goat-killing story around. Colonel John B. Alexander insists that he actually struck the goat with a martial arts blow. Michael Shermer writes another version: A man named Guy Savelli told Ronson that he had seen soldiers kill goats by staring at them, and that he himself had also done so. But as the story unfolds we discover that Savelli is recalling, years later, what he remembers about a particular "experiment" with 30 numbered goats. Savelli randomly chose goat number 16 and gave it his best death stare. But he couldn't concentrate that day, so he quit the experiment, only to be told later that goat number 17 had died. End of story. No autopsy or explanation of the cause of death. No information about how much time had elapsed; the conditions, like temperature, of the room into which the 30 goats had been placed; how long they had been there, and so forth. ----------------- But whatever happened, it does look as though at one time these psychics were trying to affect goats (and hamsters). BillK From jonkc at bellsouth.net Mon Jan 25 22:26:49 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 25 Jan 2010 17:26:49 -0500 Subject: [ExI] 1984 In-Reply-To: <4B5E0BE1.8030403@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <2d6187671001221056p1f0229ifbcd26f6ae94cb1d@mail.gmail.com> <4B5A0DBF.50104@satx.rr.com> <580930c21001231606l7d8b86c5g59caa0b1f8e5fbe3@mail.gmail.com> <4B5B9460.7060005@satx.rr.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> Message-ID: <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> When I was a kid I read Orwell's 1984 and I thought the scenes where poor Winston Smith is being tortured in the Ministry of Love were horrifying, but the most horrifying part of all was when there was no violence at all and O'Brien was just telling Winston his vision of the future. What was so terrifying was that Orwell made it sound completely logical and you started to think the horror was inevitable. I recently reread the book because I wanted to see if it would affect me the same way after all these years. It did.
I read a lot, but to this day I would say that the two most horrifying things I have ever read were the true story in "The Hot Zone", where Richard Preston describes in graphic detail how Ebola virus liquefied a man's internal organs while he was on an airplane, and O'Brien's speech to Winston Smith about the future. Stephen King doesn't know horror; Orwell and Preston know horror. I also recently reread Huxley's Brave New World; although it lacks the emotional kick-in-the-gut impact of 1984, it probably better describes our future and may even give us an outline to explain the Fermi Paradox. John K Clark From Frankmac at ripco.com Mon Jan 25 23:46:01 2010 From: Frankmac at ripco.com (Frank McElligott) Date: Mon, 25 Jan 2010 18:46:01 -0500 Subject: [ExI] super bowl time again but no tom brady Message-ID: <003c01ca9e18$909a8590$ad753644@sx28047db9d36c> Las Vegas, using their connection to the world unseen by the rest of us, have made the Indiana (where I live) Colts a five-point favorite over the New Orleans Saints to win the Super Bowl in Miami in two weeks. I know none of you care about the Super Bowl, but this year two billion dollars will be bet on the outcome, more money than any of you have (just a guess on that one), and since it is a future event, many, oh so many, of us fools will be checking our PSI to pick a side and back that choice with cold hard cash. Saints or Colts? God help me, I wait... God = Saints, yes, that's it: bet the Saints, as they have God on their side. I know the Saints will be hated on this list, and for the usual reasons ("religion is for the masses" comes to mind), but the Colts on the other hand will be liked here, as this list is mostly men, isn't it? A test of PSI is in order: if the Saints win, God exists; if the Colts win, the recession is over. How's that for PSI? Frank From spike66 at att.net Tue Jan 26 00:37:02 2010 From: spike66 at att.net (spike) Date: Mon, 25 Jan 2010 16:37:02 -0800 Subject: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) In-Reply-To: <44FA78AEB074487E92B88EC3E311F9A6@spike> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><816ABDBA8A5A4F3AB07B7525C63D4A95@spike><8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com><8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com><8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> <44FA78AEB074487E92B88EC3E311F9A6@spike> Message-ID: > ...On Behalf Of spike > Subject: [ExI] most common error: was RE: thought controled > Third arm,(was EPOC EEG headset) > > ...in the age of software spell checkers and grammar > checkers, what is the most common spelling error? ... > It would need to be an error that spelled a different word, > and the new word would need to make a grammatically correct > sentence... spike Speaking of funny errors, this one popped up on FoxNews headlines today. Climate dog? Theories: 1) Murdoch is trying to save money by hiring semiliterates for copy editors? 2) They meant Climate Doc, and a C looks like a G in some fonts? 3) The spell checkers and grammar checkers missed this because it technically makes sense? 4) Some joker intentionally called the UN climate chief a dog and slipped it past the editors? {8^D Check it out: [Image attachment: Outlook.jpg] spike
From thespike at satx.rr.com Tue Jan 26 01:24:10 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 25 Jan 2010 19:24:10 -0600 Subject: [ExI] goats and gullibility In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <580930c21001240546p307e04bft580ffc8944a8cc64@mail.gmail.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> Message-ID: <4B5E443A.2070709@satx.rr.com> On 1/25/2010 4:10 PM, BillK wrote: > Guy Savelli was the Stargate psychic that Ronson said told him about the goats. Savelli was not in Star Gate. > He is still around and will sell you training on how to kill people > with your mind. No doubt. Damien Broderick From thespike at satx.rr.com Tue Jan 26 01:26:13 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 25 Jan 2010 19:26:13 -0600 Subject: Re: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) In-Reply-To: References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><816ABDBA8A5A4F3AB07B7525C63D4A95@spike><8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com><8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com><8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> <44FA78AEB074487E92B88EC3E311F9A6@spike> Message-ID: <4B5E44B5.8070504@satx.rr.com> On 1/25/2010 6:37 PM, spike wrote: > Climate dog? Barking watchdog gave warning? From msd001 at gmail.com Tue Jan 26 02:05:59 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 25 Jan 2010 21:05:59 -0500 Subject: Re: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) In-Reply-To: References: <4650F17F2E264B828808D74CB62D3AEB@spike> <8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com> <816ABDBA8A5A4F3AB07B7525C63D4A95@spike> <8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com> <8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com> <8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> <44FA78AEB074487E92B88EC3E311F9A6@spike> Message-ID: <62c14241001251805x54c2232ax499cbc2c6d296814@mail.gmail.com> 2010/1/25 spike > > > A dog in heat? What a bitch. From spike66 at att.net Tue Jan 26 01:46:02 2010 From: spike66 at att.net (spike) Date: Mon, 25 Jan 2010 17:46:02 -0800 Subject: RE: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) In-Reply-To: <4B5E44B5.8070504@satx.rr.com> References: <4650F17F2E264B828808D74CB62D3AEB@spike><8CC686D9AEF3B4A-29E0-12322@webmail-m016.sysops.aol.com><816ABDBA8A5A4F3AB07B7525C63D4A95@spike><8CC687BB5D8F6C6-3978-4F51@webmail-d065.sysops.aol.com><8CC69159F8DA5FF-3AE8-3793@webmail-d024.sysops.aol.com><8CC6BFD5BE9A66E-2194-A09@webmail-m011.sysops.aol.com> <44FA78AEB074487E92B88EC3E311F9A6@spike> <4B5E44B5.8070504@satx.rr.com> Message-ID: > ...On Behalf Of Damien Broderick > Sent: Monday, January 25, 2010 5:26 PM > Subject: Re: [ExI] most common error: was RE: thought > controled Third arm, (was EPOC EEG headset) > > On 1/25/2010 6:37 PM, spike wrote: > > > Climate dog? > > Barking watchdog gave warning?
Possibly, but the earlier headline, in a tall compressed font, said Climate Doc, referring to Dr. Rajendra Pachauri. I'm pretty sure it was just an error, particularly embarrassing since the article under it is about errors in the Climate Doc's report. When pointing out someone else's error, one needs to double check one's own comments. I just checked and now over an hour later "Climate Dog" is still up there. Clearly Fox is in no desperate hurry to fix it. Usually Fox's Science and Technology section is the best in the biz. spike Update: they just took that story out of the featured news, so it was up for a bit over an hour. From pharos at gmail.com Tue Jan 26 09:18:11 2010 From: pharos at gmail.com (BillK) Date: Tue, 26 Jan 2010 09:18:11 +0000 Subject: [ExI] goats and gullibility In-Reply-To: <4B5E443A.2070709@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> Message-ID: On 1/26/10, Damien Broderick wrote: > Savelli was not in Star Gate. > > Ooooh! Nitpick. The name Star Gate was not used until 1991. But those involved started around 1970 and went through many name changes, personnel changes and sub-projects. BillK From stefano.vaj at gmail.com Tue Jan 26 11:34:27 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 26 Jan 2010 12:34:27 +0100 Subject: [ExI] 1984 In-Reply-To: <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> Message-ID: <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> 2010/1/25 John Clark : > I also recently reread Huxley's Brave New World, although it lacks the > emotional kick in the gut impact of 1984 it probably better describes our > future and may even give us an outline to explain the Fermi Paradox. > ?John K Clark Both 1984 and Huxley's book describe worlds fundamentally stagnating. But I find ultimately more anti-transhumanist the second, especially in its depiction of a perversion of technology in view of a final stability, an end of history, and a freezing of technological development and technical prometheanism itself. -- Stefano Vaj From gts_2000 at yahoo.com Tue Jan 26 13:09:54 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 26 Jan 2010 05:09:54 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <580930c21001250556o41321d1br946a63362cb6405c@mail.gmail.com> Message-ID: <740524.63442.qm@web36501.mail.mud.yahoo.com> --- On Mon, 1/25/10, Stefano Vaj wrote: > What about unconscious understanding, since conscious does > not seem to mean anything specific in the context of this discussion? I think the word "conscious" means something very important in the context of this discussion. But I think I understand the point you want to make. I would not argue with the use of "unconscious understanding" to describe machine intelligence. 
-gts From gts_2000 at yahoo.com Tue Jan 26 13:47:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 26 Jan 2010 05:47:11 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <90948.84503.qm@web36503.mail.mud.yahoo.com> --- On Mon, 1/25/10, Stathis Papaioannou wrote: > You are saying that in addition to the symbol grounding > problem there is the problem of attaching "meaning" to the symbols. As that article explains, symbol grounding requires both the ability to pick out referents and consciousness. We, but not computers, have the ability to hold the meanings of symbols in our minds as intentional objects, and to process those meanings consciously as you do at this very moment. > You can't explain what this meaning is I just did. > but you feel that humans have it and computers > don't. No empirical test can ever convince you that > computers have it, because by definition there is no empirical test for > it. Apparently no analytic argument can convince you either. If I met an entity on the street that passed the TT, I would not know if that entity had semantics. However, if I also knew that the entity ran only formal programs, then I would know from analytic arguments that it did not. >> Do you believe your desktop or laptop computer has > conscious understanding of the words you type? > > No... Good. Just doing a reality check there. :) You agree that your software/hardware system does not have conscious understanding of symbols (semantics) but you also argue that digital computers can have it. Let me ask you: what would it take for your desktop computer to acquire this capacity that you insist it could have but does not have? More RAM? A faster processor? Multiple processors? A bigger hard drive? A better web-cam? A better cooling system? Better programs? What will it take? > Furthermore, if the computer was based on reverse engineering a human > brain then I would say it has to have the same consciousness as a human. I don't disagree with that, but I would not call that reverse-engineered machine a software/hardware system. We may someday create conscious machines, but those machines won't look like digital computers. -gts From gts_2000 at yahoo.com Tue Jan 26 14:31:51 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 26 Jan 2010 06:31:51 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) Message-ID: <359809.40193.qm@web36507.mail.mud.yahoo.com> To the argument that "association equals symbol grounding", as has been bandied about... Modern word processors can reference words in digital dictionaries. Let us say that I write a program that does only that, and that it does this automagically at ultra-fast speed on a powerful Cray software/hardware system with massive or even infinite memory. When the human operator types in a word, the s/h system first assigns that word to a variable, call it W, and then searches for the word in a complete dictionary of the English language. It assigns the dictionary definition of W to another variable, call it D, and then makes the association W = D. The system then treats every word in D as it did the original W, looking up the definition of every word in the definition of W. It then does the same for those definitions, and so on and so on through an indefinite number of branches until it nearly or completely exhausts the complete English dictionary.
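A minimal sketch of the procedure just described, assuming a toy dictionary that maps each word to a definition string (the function name, the breadth-first bookkeeping, and the three-entry dictionary are illustrative, not part of the original post):

from collections import deque

def associate(w, dictionary):
    # Exhaustively associate W with the words in its definition, then
    # treat each of those words the same way, branch by branch, until
    # the reachable part of the dictionary is used up.
    associations = {}                    # word -> words in its definition
    queue = deque([w])
    while queue:
        word = queue.popleft()
        if word in associations or word not in dictionary:
            continue                     # already expanded, or undefined
        definition_words = dictionary[word].lower().split()
        associations[word] = definition_words
        queue.extend(definition_words)
    return associations

toy = {"sphere": "a round solid figure",
       "round": "shaped like a circle",
       "circle": "a round plane figure"}
print(associate("sphere", toy))

Note that nothing in the output ever escapes the word-to-word graph: every entry is given purely in terms of other entries, which is exactly the feature the argument turns on.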
When the program finishes, the system will have made every possible meaningful association of W to other words. Will it then have conscious understanding of the meaning of W? No. The human operator will understand W, but s/h systems have no means of attaching meanings to symbols. The system followed purely syntactic rules to make all those hundreds of millions of associations without ever understanding them. It cannot get semantics from syntax. -gts From stefano.vaj at gmail.com Tue Jan 26 14:36:27 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 26 Jan 2010 15:36:27 +0100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <359809.40193.qm@web36507.mail.mud.yahoo.com> References: <359809.40193.qm@web36507.mail.mud.yahoo.com> Message-ID: <580930c21001260636s2b2e3857k9e53c5c147dd62c6@mail.gmail.com> 2010/1/26 Gordon Swobe : > Will it then have conscious understanding of the meaning of W? No. Within the limits defined by the example, certainly it does. It would not be working otherwise... -- Stefano Vaj From gts_2000 at yahoo.com Tue Jan 26 14:43:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 26 Jan 2010 06:43:44 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <580930c21001260636s2b2e3857k9e53c5c147dd62c6@mail.gmail.com> Message-ID: <994884.25551.qm@web36508.mail.mud.yahoo.com> --- On Tue, 1/26/10, Stefano Vaj wrote: > > Will it then have conscious understanding of the meaning > of W? No. > > Within the limits defined by the example, certainly it does. It > would not be working otherwise... Can you explain what you mean by that? How can you say that the system has conscious understanding of W? Perhaps I should ask it this way: how can you say that the system has conscious understanding of W without removing the word "conscious" from your personal version of the English dictionary? -gts From max at maxmore.com Tue Jan 26 16:18:40 2010 From: max at maxmore.com (Max More) Date: Tue, 26 Jan 2010 10:18:40 -0600 Subject: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) Message-ID: <201001261618.o0QGIoOK027270@andromeda.ziaspace.com> After seeing the "Climate Dog" headline, I was primed to notice this painfully ambiguous headline this morning from MSNBC: Obama eyes freeze as CBO predicts huge deficit Ouch! Max From stathisp at gmail.com Tue Jan 26 16:34:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 27 Jan 2010 03:34:26 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <90948.84503.qm@web36503.mail.mud.yahoo.com> References: <90948.84503.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/27 Gordon Swobe : > As that article explains, symbol grounding requires both the ability to pick out referents and consciousness. You are saying that understanding causes the symbol grounding. I'm saying the symbol grounding causes the understanding. > We, but not computers, have the ability to hold the meanings of symbols in our minds as intentional objects, and to process those meanings consciously as you do at this very moment. > >> You can't explain what this meaning is > > I just did. You have invented meaning as a mysterious entity which is bestowed on symbols by another mysterious entity, understanding, neither of which you can explain any further. By Occam's razor, it's simpler and consistent with all the known facts to say that meaning arises from the association of symbols.
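To make the contrast between the two positions concrete, here is a toy sketch of what "grounding in environmental input" amounts to, assuming perception is reduced to a little log of labelled observations (the sensor_log data and field names are invented for the illustration, not taken from anyone's post):

sensor_log = [
    {"shape": "round", "color": "red", "label": "ball"},
    {"shape": "round", "color": "green", "label": "ball"},
    {"shape": "flat", "color": "red", "label": "mat"},
]

grounded = {}                    # symbol -> observed, non-symbolic features
for observation in sensor_log:
    label = observation["label"]
    features = {v for k, v in observation.items() if k != "label"}
    grounded.setdefault(label, set()).update(features)

print(grounded["ball"])          # {'round', 'red', 'green'}

Unlike the dictionary walker above, the symbol "ball" here terminates in something that is not another word in the system's vocabulary; whether that association alone amounts to understanding is, of course, the very point in dispute.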
>> but you feel that humans have it and computers >> don't. No empirical test can ever convince you that >> computers have it, because by definition there is no empirical test for >> it. Apparently no analytic argument can convince you either. > > If I met an entity on the street that passed the TT, I would not know if that entity had semantics. However if I also knew that entity ran only formal programs then I would know from analytic arguments that it did not. You haven't presented an analytic argument. The symbol grounding problem is not an analytic argument against semantics being derived from syntax unless you question-beggingly define semantics as something that cannot be derived from syntax, although you are not at all bothered by its being miraculously derived from dumb matter. And if semantics can be derived from dumb matter there is no reason why it cannot be derived from the dumb matter in a computer, despite the handicap of that dumb matter being arranged to behave in an intelligent way. So I repeat, you have not even presented an argument to show that computers can't think. >>> Do you believe your desktop or laptop computer has >> conscious understanding of the words you type? >> >> No... > > Good. Just doing a reality check there. :) > > You agree that your software/hardware system does not have conscious understanding of symbols (semantics) but you also argue that digital computers can have it. Let me ask you: what would it take for your desktop computer to acquire this capacity that you insist it could have but does not have? More ram? A faster processor? Multiple processors? A bigger hard drive? A better web-cam? A better cooling system? Better programs? What will it take? Better software and hardware up to the task of running it, of course. At present, the closest we have come to a computer model of the brain is a simulation of a small sliver of rat cortex, with no clear evidence that it is actually behaving in a physiological manner. It might not work at all. If the whole rat brain is simulated and starts to spontaneously develop ratlike behaviour, then that would be evidence that it also has rat consciousness, such as it may be. It would be evidence of rat consciousness due to the logical impossibility of separating consciousness from intelligent behaviour, which at this point I will assume you agree by default with as you have passed up the opportunity to show where there is a logical error. >> Furthermore, if the computer was based on reverse engineering a human >> brain then I would say it has to have the same consciousness as a human. > > I don't disagree with that, but I would not call that reverse engineered machine a software/hardware system. We may someday create conscious machines, but those machines won't look like digital computers. Reverse engineering something means understanding it well enough to build a functional analogue. -- Stathis Papaioannou From stathisp at gmail.com Tue Jan 26 16:46:41 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 27 Jan 2010 03:46:41 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <359809.40193.qm@web36507.mail.mud.yahoo.com> References: <359809.40193.qm@web36507.mail.mud.yahoo.com> Message-ID: 2010/1/27 Gordon Swobe : > To the argument that "association equals symbol grounding" as has been bandied about... > > Modern word processors can reference words in digital dictionaries. 
Let us say that I write a program that does only that, and that it does this automagically at ultra-fast speed on a powerful Cray software/hardware system with massive or even infinite memory. When the human operator types in a word, the s/h system first assigns that word to a variable, call it W, and then searches for the word in a complete dictionary of the English language. It assigns the dictionary definition of W to another variable, call it D, and then makes the association W = D. > > The system then treats every word in D as it did for for the original W, looking up the definition of every word in the definition of W. It then does same for those definitions, and so on and so on through an indefinite number of branches until it nearly or completely exhausts the complete English dictionary. > > When the program finishes, the system will have made every possible meaningful association of W to other words. Will it then have conscious understanding the meaning of W? No. The human operator will understand W, but s/h systems have no means of attaching meanings to symbols. The system followed purely syntactic rules to make all those hundreds of millions of associations without ever understanding them. It cannot get semantics from syntax. But if you put a human in place of the computer doing the same thing he won't understand the symbols either, no matter how intelligent he is. The symbols need to be associated with some environmental input, and then they have "meaning". Your claim is that the symbols have to be associated with the environmental input *and* an extra process, which is mysterious and scientifically superfluous, has to take place as well for the understanding to occur. It is mysterious because you haven't any way of explaining how something like a chemical reaction could give rise to meaning, and it is scientifically superfluous because everything would work perfectly well without this extra step. So Occam's razor would suggest the more economic theory is that meaning is simply that which occurs when symbols are grounded in environmental input. -- Stathis Papaioannou From sparge at gmail.com Tue Jan 26 17:01:03 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 26 Jan 2010 12:01:03 -0500 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <359809.40193.qm@web36507.mail.mud.yahoo.com> Message-ID: How long will this silly thread last? It should have been obvious two weeks ago that it wasn't going to do anything but chase its tail. -Dave From jonkc at bellsouth.net Tue Jan 26 16:50:53 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 26 Jan 2010 11:50:53 -0500 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <90948.84503.qm@web36503.mail.mud.yahoo.com> References: <90948.84503.qm@web36503.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has written 4. > > We, but not computers, have the ability to hold the meanings of symbols in our minds as intentional objects, and to processs those meanings consciously as you do at this very moment. Stathis Papaioannou: >> You can't explain what this meaning is > > I just did. What you just said was that the meaning of meaning is the ability to hold meanings in our minds. And round and round we go. > If I met an entity on the street that passed the TT, I would not know if that entity had semantics. However if I also knew that entity ran only formal programs then I would know from analytic arguments that it did not. 
Your "analysis" goes like this: I Gordon Swobe fail to see how intelligence can produce consciousness and the only possible explanation for my failure is that Darwin was wrong and intelligence can not produce consciousness. I mean, I'm Gordon Swobe, what other explanation for my failure to see a connection could there possibly be? > You agree that your software/hardware system does not have conscious understanding of symbols (semantics) but you also argue that digital computers can have it. Let me ask you: what would it take for your desktop computer to acquire this capacity that you insist it could have but does not have? Intelligence. > [long tedious thought experiment] .... Will it then have conscious understanding the meaning of W? No. The human operator will understand W, but s/h systems have no means of attaching meanings to symbols. The system followed purely syntactic rules to make all those hundreds of millions of associations without ever understanding them. It cannot get semantics from syntax. As is your custom in thought experiments you simply declare what you are trying to prove. I really don't understand why you don't just keep the declarations and skip the thought experiment, it would save a lot of time. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Jan 26 17:18:01 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 26 Jan 2010 11:18:01 -0600 Subject: [ExI] most common error In-Reply-To: <201001261618.o0QGIoOK027270@andromeda.ziaspace.com> References: <201001261618.o0QGIoOK027270@andromeda.ziaspace.com> Message-ID: <4B5F23C9.5050809@satx.rr.com> On 1/26/2010 10:18 AM, Max More wrote: > After seeing the "Climate Dog" headline, I was primed to notice this > painfully ambiguous headline this morning from MSNBC: > > Obama eyes freeze as CBO predicts huge deficit Possibly not quite as painful as the fabled WWI headline: ENEMY PUTSCH BOTTLES UP FRENCH From spike66 at att.net Tue Jan 26 17:22:09 2010 From: spike66 at att.net (spike) Date: Tue, 26 Jan 2010 09:22:09 -0800 Subject: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset) In-Reply-To: <201001261618.o0QGIoOK027270@andromeda.ziaspace.com> References: <201001261618.o0QGIoOK027270@andromeda.ziaspace.com> Message-ID: > ...On Behalf Of Max More > > After seeing the "Climate Dog" headline, I was primed to > notice this painfully ambiguous headline this morning from MSNBC: > > Obama eyes freeze as CBO predicts huge deficit > > Ouch! > > Max Waaaaaahahahahaaaa! Thanks Max. My eyes froze too when I see the deficit predictions. We really do harm to our language by the increasing use of nouns as verbs. When computers try to figure out humans by reading our books, the recent stuff may be hard to comprehend because of the ambiguity introduced by all the reprehensible verbing that we are doing. 
spike

From jonkc at bellsouth.net Tue Jan 26 17:10:21 2010
From: jonkc at bellsouth.net (John Clark)
Date: Tue, 26 Jan 2010 12:10:21 -0500
Subject: [ExI] 1984
In-Reply-To: <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com>
Message-ID: <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net>

On Jan 26, 2010, Stefano Vaj wrote:

> Both 1984 and Huxley's book describe worlds fundamentally stagnating. But I find ultimately more anti-transhumanist the second,

Well, despite its very serious flaws, I'd certainly rather live in Huxley's world than Orwell's. I agree that the society depicted in Brave New World is anti-transhumanist (but not more so than 1984!) but that doesn't mean the book is; pointing out valid potential dangers is not anti-transhumanist, and I think Huxley will prove to be a better prophet than Orwell. It may be the reason we can't find ET. But Orwell's book was more enjoyable, in a horrible sort of way.

John K Clark

From spike66 at att.net Tue Jan 26 17:56:11 2010
From: spike66 at att.net (spike)
Date: Tue, 26 Jan 2010 09:56:11 -0800
Subject: [ExI] most common error: was RE: thought controled Third arm, (was EPOC EEG headset)
In-Reply-To: <201001261618.o0QGIoOK027270@andromeda.ziaspace.com>
References: <201001261618.o0QGIoOK027270@andromeda.ziaspace.com>
Message-ID: <78B012EF962E475DBDDCAE90E5031D7B@spike>

> ...On Behalf Of Max More
>
> After seeing the "Climate Dog" headline, I was primed to notice this painfully ambiguous headline this morning from MSNBC:
>
> Obama eyes freeze as CBO predicts huge deficit
>
> Ouch!
>
> Max

After a slight modification which didn't take away the humor, the headline is still there two hours later. A 1.35T deficit would cause more than my eyes to freeze:

spike

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Outlook.jpg
Type: image/jpeg
Size: 31809 bytes
Desc: not available

From stefano.vaj at gmail.com Tue Jan 26 18:30:44 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 26 Jan 2010 19:30:44 +0100
Subject: [ExI] 1984
In-Reply-To: <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net>
Message-ID: <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com>

2010/1/26 John Clark :
> On Jan 26, 2010, Stefano Vaj wrote:
>> Both 1984 and Huxley's book describe worlds fundamentally stagnating. But I find ultimately more anti-transhumanist the second,
>
> Well, despite its very serious flaws, I'd certainly rather live in Huxley's world than Orwell's. I agree that the society depicted in Brave New World is anti-transhumanist (but not more so than 1984!) but that doesn't mean the book is; pointing out valid potential dangers is not anti-transhumanist, and I think Huxley will prove to be a better prophet than Orwell.
In fact, by "the second" I meant "the second world", not "the second book". But I suspect that it would be fair to consider *this* Huxley and his book as anti-transhumanist themselves, since in the Brave New World there is no real alternative to the world "as it is", the primitives being even more brutish than the ordinary citizens, and the author essentially agreeing that "rocking the boat" would be too dangerous and ultimately pointless.

> But Orwell's book was more enjoyable, in a horrible sort of way.

Sure, because in it, if you are a member of the internal party, you are at least under the illusion that you are doing something meaningful... :-)

The lack of any sense whatsoever is what makes the Brave New World especially frustrating, including for its reader... :-)

--
Stefano Vaj

From thespike at satx.rr.com Tue Jan 26 19:07:12 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 26 Jan 2010 13:07:12 -0600
Subject: [ExI] goats and gullibility
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com>
Message-ID: <4B5F3D60.90002@satx.rr.com>

On 1/26/2010 3:18 AM, BillK wrote:

>> Savelli was not in Star Gate.
> Ooooh! Nitpick.

Not a nitpick. Savelli was not involved with the scientific remote viewing program.

> The name Star Gate was not used until 1991. But those involved started around 1970 and went through many name changes, personnel changes and sub-projects.

So what? The long-time scientific director of the program, Dr. Edwin May (involved with SRI/SAIC's project from 1976-95), told me: "Never heard of him." The internet is clogged with lying bozos who either were never involved with the government psi program or were briefly connected to it in minor roles (like Major Ed Dames) and now boast that they were key geniuses and operatives, as they rake in the dough from gullible idiots. What I find breathtaking is the way various intelligent people on this list are equally gleefully gullible in quoting this drivel as if it proved something, instead of either keeping their mouths shut (if they know nothing about it) or doing their due diligence before speaking.

There's plenty to complain about in the way STAR GATE frayed and degenerated toward the end (see READING THE ENEMY'S MIND, Major Paul H. Smith's history of his involvement in STAR GATE), but that's no more reason to dismiss the solid core work than it would be (like the godbotherers) to dismiss stem cell work because of the embarrassments in Korea.

Damien Broderick

From bbenzai at yahoo.com Tue Jan 26 19:25:46 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 26 Jan 2010 11:25:46 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <795077.37984.qm@web113602.mail.gq1.yahoo.com>

Gordon Swobe wrote:

> To the argument that "association equals symbol grounding" as has been bandied about...
>
> Modern word processors can reference words in digital dictionaries. Let us say that I write a program that does only that, and that it does this automagically at ultra-fast speed on a powerful Cray software/hardware system with massive or even infinite memory. When the human operator types in a word, the s/h system first assigns that word to a variable, call it W, and then searches for the word in a complete dictionary of the English language.
> It assigns the dictionary definition of W to another variable, call it D, and then makes the association W = D.
>
> The system then treats every word in D as it did the original W, looking up the definition of every word in the definition of W. It then does the same for those definitions, and so on and so on through an indefinite number of branches until it nearly or completely exhausts the complete English dictionary.
>
> When the program finishes, the system will have made every possible meaningful association of W to other words. Will it then have conscious understanding of the meaning of W? No. The human operator will understand W, but s/h systems have no means of attaching meanings to symbols. The system followed purely syntactic rules to make all those hundreds of millions of associations without ever understanding them. It cannot get semantics from syntax.

Again, this is pure obtuseness. The associations that ground symbols are obviously not associations with other abstract symbols. They are associations with *sensory information*. You know, that stuff that's the only possible contact that any mind can have with 'reality'.

Please stop this misinterpretation of other people's arguments. It's tiresome, looks increasingly deliberate, and just reinforces my earlier opinion that you are in fact winding everyone up. I suspect that sooner or later you'll be asked to get back under your bridge lest any goats come to harm. Don't let it come to that, Gordon.

Think about what people are saying, instead of just regurgitating the same tired old meaningless phrases. Please.

Ben Zaiboc

From pharos at gmail.com Tue Jan 26 20:59:24 2010
From: pharos at gmail.com (BillK)
Date: Tue, 26 Jan 2010 20:59:24 +0000
Subject: [ExI] goats and gullibility
In-Reply-To: <4B5F3D60.90002@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com>
Message-ID:

On 1/26/10, Damien Broderick wrote:
> Not a nitpick. Savelli was not involved with the scientific remote viewing program.
>
> So what? The long-time scientific director of the program, Dr. Edwin May (involved with SRI/SAIC's project from 1976-95), told me: "Never heard of him." The internet is clogged with lying bozos who either were never involved with the government psi program or were briefly connected to it in minor roles (like Major Ed Dames) and now boast that they were key geniuses and operatives, as they rake in the dough from gullible idiots.

OK. If you define Star Gate as May's SRI/SAIC remote viewing project, then I agree that Savelli and many other psychics were not involved in that particular part of the government psychic research program. The term 'Star Gate' is more normally used as an umbrella term for the many sub-projects involving psychic research. It was initiated by the CIA at SRI, but other branches of the government were running projects as well, including Army Intelligence and the Defense Intelligence Agency. Obviously, a military project trying to kill goats via psychic powers would not come under May's domain.

By the way, I noticed that many of the so-called psychics at SRI were $cientologists who credited their psychic powers to $cientology training, so they had very much a vested interest in persuading people that they were getting good results.
Personally I don't think I would ever believe anything that was credited to $cientology super-powers.

BillK

From thespike at satx.rr.com Tue Jan 26 21:39:09 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 26 Jan 2010 15:39:09 -0600
Subject: [ExI] goats and gullibility
In-Reply-To:
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com>
Message-ID: <4B5F60FD.1040105@satx.rr.com>

On 1/26/2010 2:59 PM, BillK wrote:

> By the way, I noticed that many of the so-called psychics at SRI were $cientologists who credited their psychic powers to $cientology training, so they had very much a vested interest in persuading people that they were getting good results. Personally I don't think I would ever believe anything that was credited to $cientology super-powers.

This is discussed in OUTSIDE THE GATES OF SCIENCE. May and Puthoff, in their younger days, were sucked into $ci, which I find depressing, but both left it many years ago and today May, an empiricist, is scathing about the woo-woo affiliations of many "psychics". But I suspect it's understandable that, when you start seeing the evidence for this stuff, you'll look for psi-friendly ideologies to provide social and quasi-intellectual support. If mainstream science were not so aggressively/defensively antagonistic to the phenomena and spent more effort trying to deal with them, instead of blindly ignoring them, psi-gifted people would have somewhere more rational to turn for support. (In other cultures psychics tend to be Buddhists or Catholics, etc, and while that's arguably just as silly it's not as conspicuously venomous as the $ci monstrosity.)

Damien Broderick

From p0stfuturist at yahoo.com Tue Jan 26 20:45:15 2010
From: p0stfuturist at yahoo.com (Post Futurist)
Date: Tue, 26 Jan 2010 12:45:15 -0800 (PST)
Subject: [ExI] 1984
Message-ID: <725177.41135.qm@web59913.mail.ac4.yahoo.com>

"> But Orwell's book was more enjoyable, in a horrible sort of way. -John Clark"

Not merely what Stefano wrote concerning the "internal party", but also because 1984's is an outdated scenario, while Brave New World's is less so -- to put it optimistically.

From jonkc at bellsouth.net Tue Jan 26 21:50:33 2010
From: jonkc at bellsouth.net (John Clark)
Date: Tue, 26 Jan 2010 16:50:33 -0500
Subject: [ExI] Psi and gullibility
In-Reply-To: <4B5F3D60.90002@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com>
Message-ID: <40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net>

Damien, suppose just for the sake of argument that psi did not exist: don't you think that many and perhaps most members of the human race would nevertheless think that it did? Of course they would; magical thinking has a powerful hold on the imagination and can only be resisted with effort.

John K Clark
From jonkc at bellsouth.net Tue Jan 26 22:22:49 2010
From: jonkc at bellsouth.net (John Clark)
Date: Tue, 26 Jan 2010 17:22:49 -0500
Subject: [ExI] 1984
In-Reply-To: <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net> <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com>
Message-ID: <0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net>

On Jan 26, 2010, Stefano Vaj wrote:

> I suspect that it would be fair to consider *this* Huxley and his book as anti-transhumanist themselves, since in the Brave New World there is no real alternative

Huxley pointed out a very real problem; there may be an alternative, but Huxley didn't write about it because he didn't know what it was, and neither do I. The problem he described may just be as profound as profound can be.

> Sure, because in it, if you are a member of the internal party, you are at least under the illusion that you are doing something meaningful

But in 1984 the "meaningful" thing you are doing, as party members freely admit, is causing more pain to exist in the world. No, I'd rather live in the Brave New World!

> The lack of any sense whatsoever is what makes the Brave New World especially frustrating, including for its reader.

There is meaning in Brave New World, the pursuit of happiness; but that's it, nothing else. And that's just not enough to build Jupiter Brains and engineer the universe, or even the galaxy. I hope I'm wrong but that may be the reason we don't observe an engineered cosmos. Still, an eternity of lowbrow bliss may not be ideal but it beats the hell out of 1984.

John K Clark

From thespike at satx.rr.com Tue Jan 26 22:54:29 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 26 Jan 2010 16:54:29 -0600
Subject: [ExI] Psi and gullibility
In-Reply-To: <40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com> <40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net>
Message-ID: <4B5F72A5.4050304@satx.rr.com>

On 1/26/2010 3:50 PM, John Clark wrote:

> Damien, suppose just for the sake of argument that psi did not exist: don't you think that many and perhaps most members of the human race would nevertheless think that it did?

I played around with some other examples of desirable features that people wish were true and that actually are, but why bother. Of course many people wish purely mental communication and action (or magic, sorcery etc) were real and/or fear this greatly, but so what? That's what scientific protocols are for--to put such wishes under pressure, and see if the phenomenon at issue actually is there, and what constrains it, and finally to generate a theory able to accommodate it. The first is established, the second is somewhat in hand (given the astonishingly sparse funding compared to burger advertising or particle physics or sports or religion, not bad), the third is hardly begun.
From max at maxmore.com Wed Jan 27 00:43:23 2010
From: max at maxmore.com (Max More)
Date: Tue, 26 Jan 2010 18:43:23 -0600
Subject: [ExI] Psi and gullibility
Message-ID: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com>

> On 1/26/2010 3:50 PM, John Clark wrote:
>
>> Damien, suppose just for the sake of argument that psi did not exist: don't you think that many and perhaps most members of the human race would nevertheless think that it did?

My first reaction to this question was: "Yes, of course." Then -- me being an old and renewed major comic book enthusiast -- it occurred to me that you could just as well ask: "Suppose that super-powers did not exist. (No one actually has super-strength, speed, intellect, agility, immortality, flight, invisibility, the ability to grow or shrink, etc. etc.) Don't you think that most members of the human race would nevertheless think that it did?"

This seems like an exactly parallel question. If we exclude psi powers as a super-power, it seems very clear that the correct answer is: No. No one actually believes that super-powers exist, even though they would very much like them to exist (especially if they could possess those powers themselves).

Hmmm.

Max

From max at maxmore.com Wed Jan 27 00:52:49 2010
From: max at maxmore.com (Max More)
Date: Tue, 26 Jan 2010 18:52:49 -0600
Subject: [ExI] How long can you survive without water?
Message-ID: <201001270052.o0R0quBJ028416@andromeda.ziaspace.com>

I thought you couldn't survive more than a few days -- a week maximum -- before dying of dehydration. This guy is alive after two weeks:

http://www.msnbc.msn.com/id/35086799/ns/world_news-haiti_earthquake/

The details are minimal, so perhaps he did absorb some moisture somehow. Anyone know if surviving this long without water is unprecedented or not?

Max

From spike66 at att.net Wed Jan 27 01:11:18 2010
From: spike66 at att.net (spike)
Date: Tue, 26 Jan 2010 17:11:18 -0800
Subject: [ExI] How long can you survive without water?
In-Reply-To: <201001270052.o0R0quBJ028416@andromeda.ziaspace.com>
References: <201001270052.o0R0quBJ028416@andromeda.ziaspace.com>
Message-ID:

On Behalf Of Max More
> Subject: [ExI] How long can you survive without water?
>
> I thought you couldn't survive more than a few days -- a week maximum -- before dying of dehydration. This guy is alive after two weeks:
>
> http://www.msnbc.msn.com/id/35086799/ns/world_news-haiti_earthquake/
>
> The details are minimal, so perhaps he did absorb some moisture somehow. Anyone know if surviving this long without water is unprecedented or not?
>
> Max

Also, can we confirm he wasn't trapped by one of the aftershocks? That would be a hell of a note: give up on finding survivors because the quake was ten days ago, then find out your rescue team farted around while someone perished, having been trapped four days before by an aftershock. I would hate to be a Haitian.

spike

From lacertilian at gmail.com Wed Jan 27 00:44:47 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Tue, 26 Jan 2010 16:44:47 -0800
Subject: [ExI] The digital nature of brains
In-Reply-To: <795077.37984.qm@web113602.mail.gq1.yahoo.com>
References: <795077.37984.qm@web113602.mail.gq1.yahoo.com>
Message-ID:

Ben Zaiboc :
> I suspect that sooner or later you'll be asked to get back under your bridge lest any goats come to harm. Don't let it come to that, Gordon.
> Think about what people are saying, instead of just regurgitating the same tired old meaningless phrases. Please.

I second the motion.
The whole syntax/semantics/symbol grounding/consciousness debacle is too much of a memetically incestuous mess for me to lay all the blame on Gordon, but certainly it wouldn't exist if he stopped sustaining it. As far as I can tell, the situation is Gordon versus The World. It would be much easier to kill the thing by taking Gordon out than by taking The World out, and considering the fact that no one seems to have anything better to gain here but pride I am strongly in favor of bringing about a conclusion as swiftly as possible.

I'd like to point out that I've been subscribed to Extropy-Chat for a little more than twelve days. I've gotten a rough average of five messages per day; sixty-one in all. In that span, the only thing I've learned is that the human brain is plastic enough for our minds to add novel limbs to their body images. I can't even apply that information right now. This is very low extropy!

(To be fair, I also learned that submitting a 9,197 byte exploration of human communication in the abstract will not, as a rule, attract much attention. While I'm here: if anyone wants me to continue the thought started in "[ExI] The Throughput of English", please let me know. I'll finish it eventually anyway, but won't necessarily send it here.)

From lacertilian at gmail.com Wed Jan 27 00:58:21 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Tue, 26 Jan 2010 16:58:21 -0800
Subject: [ExI] Psi and gullibility
In-Reply-To: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com>
References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com>
Message-ID:

Max More :
> you could just as well ask: "Suppose that super-powers did not exist. (No one actually has super-strength, speed, intellect, agility, immortality, flight, invisibility, the ability to grow or shrink, etc. etc.) Don't you think that most members of the human race would nevertheless think that it did?"

Close! But, trickily, all of the powers explicitly mentioned here are of the easily-verified type. To be fair, we have to compare against much more esoteric examples; time travel might be a good choice, if it's the sort where no matter is actually transmitted.

I would be hard pressed to say, one way or another, if most members of the human race believe that time travel of any kind is possible. I know that a significant number does, but I'm also convinced that the set of people who believe in precognition, for example, is substantially larger.

Maybe precognition is just inherently more believable. You could just as easily argue that some factor is tipping the scales, though. Both theories are unacceptably presumptuous!

From stathisp at gmail.com Wed Jan 27 02:14:46 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Wed, 27 Jan 2010 13:14:46 +1100
Subject: [ExI] The digital nature of brains
In-Reply-To:
References: <795077.37984.qm@web113602.mail.gq1.yahoo.com>
Message-ID:

2010/1/27 Spencer Campbell :
> The whole syntax/semantics/symbol grounding/consciousness debacle is too much of a memetically incestuous mess for me to lay all the blame on Gordon, but certainly it wouldn't exist if he stopped sustaining it. As far as I can tell, the situation is Gordon versus The World. It would be much easier to kill the thing by taking Gordon out than by taking The World out, and considering the fact that no one seems to have anything better to gain here but pride I am strongly in favor of bringing about a conclusion as swiftly as possible.
I can understand your frustration because Gordon keeps repeating his claims without rebutting the arguments or counterclaims, but it's not as if we are debating whether the world is flat or not. These are difficult and important philosophical problems, and especially important for anyone interested in transhumanism. You may one day be in the position of having a brain prosthesis or uploading your mind to a computer. Do you think it's just trivially obvious that that would be OK, or have you arrived at the conclusion through extensive reading and thinking?

--
Stathis Papaioannou

From thespike at satx.rr.com Wed Jan 27 05:43:39 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 26 Jan 2010 23:43:39 -0600
Subject: [ExI] Psi and gullibility
In-Reply-To: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com>
References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com>
Message-ID: <4B5FD28B.50707@satx.rr.com>

On 1/26/2010 6:43 PM, Max More wrote:

>>> Damien, suppose just for the sake of argument that psi did not exist: don't you think that many and perhaps most members of the human race would nevertheless think that it did?

> My first reaction to this question was: "Yes, of course." Then -- me being an old and renewed major comic book enthusiast -- it occurred to me that you could just as well ask: "Suppose that super-powers did not exist. (No one actually has super-strength, speed, intellect, agility, immortality, flight, invisibility, the ability to grow or shrink, etc. etc.) Don't you think that most members of the human race would nevertheless think that it did?"

Max finally said "No" but I think it's clear that the answer is "Yes"--not in the sense that people think *they* have such powers, but all the litanies of defunct and active gods, demons, saints, angels, etc attest to this sort of belief. I suppose it's partly our infant memories of those supernatural humans, our parents, who were vastly stronger, smarter, knew the naughtiness of our secret thoughts, etc, and whom we wanted to please and have them love us; and partly the avid human response to superstimuli, even if we have to devise them in imagination.

I think the key merit of John Clark's question is that it highlights why most people who pride ourselves on rationality despise the idea of psi, rather than remaining open-minded and exploratory about it: mad humans seem to make it a special feature of their delusions. They project their intentions upon the neutral activities of others, they are threatened or excited by "ideas of reference", they feel others putting scary thoughts into their heads, etc. I regard it as possible that psi actually is responsible for a quite small proportion of this, but mostly I assume it's a brain pathology that is often abolished by antipsychotic drugs (as Stathis tells us). But since real psi appears to operate at a low level for most of us, and gets mixed up with wishful thinking, pareidolia, imagination, etc, it's very easy to suppose that those who make strong claims for it are in the same camp as the crazies. (And some of them, admittedly, do seem to be. Then again, the same sort of accusation is made by all those reasonable people against transhumanists, singularitarians, cryonicists, CR dieters, etc etc.)
Damien Broderick

From jonkc at bellsouth.net Wed Jan 27 05:44:02 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 27 Jan 2010 00:44:02 -0500
Subject: [ExI] Psi and gullibility
In-Reply-To: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com>
References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com>
Message-ID: <235159AA-0413-41F9-8520-18BE0A5CA6EA@bellsouth.net>

On Jan 26, 2010, Max More wrote:

Me:
>>> Damien, suppose just for the sake of argument that psi did not exist: don't you think that many and perhaps most members of the human race would nevertheless think that it did?

> Suppose that super-powers did not exist. (No one actually has super-strength, speed, intellect, agility, immortality, flight, invisibility, the ability to grow or shrink, etc. etc.) Don't you think that most members of the human race would nevertheless think that it did? This seems like an exactly parallel question. If we exclude psi powers as a super-power, it seems very clear that the correct answer is: No.

I don't understand, why would we exclude psi as a super-power? And psi is not alone, there is another super-power that is believed even more universally, prayer; and prayer doesn't work any better than psi does.

Damien Broderick wrote:

> That's what scientific protocols are for--to put such wishes under pressure, and see if the phenomenon at issue actually is there, and what constrains it, and finally to generate a theory able to accommodate it. The first is established, the second is somewhat in hand (given the astonishingly sparse funding compared to burger advertising or particle physics or sports or religion, not bad), the third is hardly begun.

The proof of psi's existence is somewhat in hand? After a century's effort parapsychologists have precisely nothing to show for their efforts; they might as well have kept their hands in their pockets for a hundred years. And if psi were real you wouldn't need a 10 billion dollar accelerator to discover it; the 12-hour operating expenses of one Burger King restaurant would be more than enough to fund a study to prove definitively that psi existed, provided of course that it did exist.

Let me expand on my question a little: assuming psi did not exist, what would the mainstream scientific community say about it? They would say we can find no evidence of psi. What would most people tell the scientists? They would say you're too hidebound and aren't trying hard enough and you need to keep trying the same thing over and over again and spend more money and never ever give up or move on to something more productive. Sound familiar?

John K Clark

From lacertilian at gmail.com Wed Jan 27 05:03:54 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Tue, 26 Jan 2010 21:03:54 -0800
Subject: [ExI] The digital nature of brains
In-Reply-To:
References: <795077.37984.qm@web113602.mail.gq1.yahoo.com>
Message-ID:

Stathis Papaioannou :
> You may one day be in the position of having a brain prosthesis or uploading your mind to a computer. Do you think it's just trivially obvious that that would be OK, or have you arrived at the conclusion through extensive reading and thinking?

Neither! I have not given more thought to the subject than what I've attained with idle, wandering imaginings, and only casually investigated whatever relevant literature happened to be dropped in front of me. I am of the opinion that this is more than sufficient for the time being.
Until I have the ability to realistically project a date at which I would have the opportunity to upload my mind, for example, I have much more pressing issues to devote my processing power to.

But, to address the question: I would leap at the chance for brain prostheses without much concern for my existential status, but I doubt I could ever be convinced to upload my mind directly from its current wetware. After I've converted to a more reliable, well-understood technology, maybe. Nanobots replacing neurons with silicon simulacra. That sort of thing. My current working theory goes no further than "I should be okay as long as I make the shift incrementally".

From thespike at satx.rr.com Wed Jan 27 05:58:50 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 26 Jan 2010 23:58:50 -0600
Subject: [ExI] Psi and gullibility
In-Reply-To: <235159AA-0413-41F9-8520-18BE0A5CA6EA@bellsouth.net>
References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <235159AA-0413-41F9-8520-18BE0A5CA6EA@bellsouth.net>
Message-ID: <4B5FD61A.2010207@satx.rr.com>

On 1/26/2010 11:44 PM, John Clark wrote:

>> That's what scientific protocols are for--to put such wishes under pressure, and see if the phenomenon at issue actually is there, and what constrains it, and finally to generate a theory able to accommodate it. The first is established, the second is somewhat in hand (given the astonishingly sparse funding compared to burger advertising or particle physics or sports or religion, not bad), the third is hardly begun.

> The proof of psi's existence is somewhat in hand?

Sometimes you skim too fast to understand what has been written. My sentences above mean (1) it has been established that the phenomenon actually is there; (2) some of what constrains it is known; (3) a theory to accommodate it has hardly begun. And to repeat (tediously): since you and many other giant brains refuse to examine the available evidence, you don't even know that point (1) is true. That's really special, and I'm happy for you.

Damien Broderick

From jonkc at bellsouth.net Wed Jan 27 06:05:49 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 27 Jan 2010 01:05:49 -0500
Subject: [ExI] Psi and gullibility
In-Reply-To: <4B5FD28B.50707@satx.rr.com>
References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com>
Message-ID: <78382108-4F9B-4C0D-B1CE-DC7755BD5656@bellsouth.net>

On Jan 27, 2010, Damien Broderick wrote:

> I think the key merit of John Clark's question is that it highlights why most people who pride ourselves on rationality despise the idea of psi

I don't despise the idea of psi any more than I despise the idea that gravity is an inverse-cube law; it's just that neither exists in our universe. As a matter of fact I think it's a pity psi doesn't work, it would be great fun if it did, cold fusion too.

> you and many other giant brains refuse to examine the available evidence

I refuse to examine the available typing.

John K Clark

From lcorbin at rawbw.com Wed Jan 27 06:38:32 2010
From: lcorbin at rawbw.com (Lee Corbin)
Date: Tue, 26 Jan 2010 22:38:32 -0800
Subject: [ExI] "Brave New World" is a nasty book
Message-ID: <4B5FDF68.40209@rawbw.com>

John Clark wrote:

> There is meaning in Brave New World, the pursuit of happiness; but that's it, nothing else. And that's just not enough to build Jupiter Brains and engineer the universe, or even the galaxy.
> I hope I'm wrong but that may be the reason we don't observe an engineered cosmos. Still, an eternity of lowbrow bliss may not be ideal but it beats the hell out of 1984.

Very unlikely, IMO, that other species would have followed our trajectory closely enough to foster anything as recognizable as Brave New World.

The anti-tech and anti-humanist and especially anti-H+ nature of Huxley's book is described by our man Dave Pearce very well at http://www.hedweb.com/huxley/bnw.htm

Lee

From jonkc at bellsouth.net Wed Jan 27 07:40:12 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 27 Jan 2010 02:40:12 -0500
Subject: [ExI] "Brave New World" is a nasty book
In-Reply-To: <4B5FDF68.40209@rawbw.com>
References: <4B5FDF68.40209@rawbw.com>
Message-ID: <59ED726F-BE29-4064-A4CD-347B8FC97057@bellsouth.net>

On Jan 27, 2010, Lee Corbin wrote:

> Very unlikely, IMO, that other species would have followed our trajectory closely enough to foster anything as recognizable as Brave New World.

I think any intelligent being is going to have something like a happiness-sadness scale, and if we had complete control of it there would be a great, perhaps overwhelming, temptation to open our personal preferences control panel and push that happiness slide switch as far right as it will go. And rather than accomplish something important to get that agreeable feeling of pride and satisfaction, just crank up that feeling directly and don't bother accomplishing anything at all.

> The anti-tech and anti-humanist and especially anti-H+ nature of Huxley's book

I think that's unfair to Huxley; he's not anti anything, he's pointing out a valid problem.

> is described by our man Dave Pearce very well at http://www.hedweb.com/huxley/bnw.htm

I stopped reading when he said "John the Savage commits suicide soon after taking soma". The Savage never took soma.

John K Clark

From pharos at gmail.com Wed Jan 27 10:46:36 2010
From: pharos at gmail.com (BillK)
Date: Wed, 27 Jan 2010 10:46:36 +0000
Subject: [ExI] How long can you survive without water?
In-Reply-To: <201001270052.o0R0quBJ028416@andromeda.ziaspace.com>
References: <201001270052.o0R0quBJ028416@andromeda.ziaspace.com>
Message-ID:

On 1/27/10, Max More wrote:
> I thought you couldn't survive more than a few days -- a week maximum -- before dying of dehydration. This guy is alive after two weeks:
>
> http://www.msnbc.msn.com/id/35086799/ns/world_news-haiti_earthquake/
>
> The details are minimal, so perhaps he did absorb some moisture somehow. Anyone know if surviving this long without water is unprecedented or not?

The BBC says 12 days, because he was trapped in an aftershock while trying to loot the shop (which had been frequently looted). I would guess that even the 12 day figure is debatable.

Survivalists have the general 3-3-3 rule for survival: 3 hours in freezing conditions, 3 days without water, 3 weeks without food. This works fine as a rough average. If you are young and healthy and temperature and humidity are not extreme, you could probably double these estimates.

Wikipedia says that an aircrew apparently survived 8 days in a life raft without water. One case in the Australian outback survived 12 days, but he ate some flowers and plants, which probably gave him some moisture. In the case of trapped people, lying still avoids loss of moisture by perspiration, and sometimes condensation or rainwater can be obtained.
So, survival without water varies depending on age, health and weather conditions. Babies and animals die after a few hours if left in a car in the hot sun. But 7 to 8 days in the open is probably about the limit (unless you can catch some rainwater).

BillK

From stefano.vaj at gmail.com Wed Jan 27 10:55:15 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Wed, 27 Jan 2010 11:55:15 +0100
Subject: [ExI] 1984
In-Reply-To: <0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net> <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com> <0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net>
Message-ID: <580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com>

2010/1/26 John Clark :
> Huxley pointed out a very real problem; there may be an alternative, but Huxley didn't write about it because he didn't know what it was, and neither do I. The problem he described may just be as profound as profound can be.

The alternative is probably to part altogether with the idea of "safety and happiness for the largest number" as exclusive societal goals and take one's chances with progress and change, isn't it?

> But in 1984 the "meaningful" thing you are doing, as party members freely admit, is causing more pain to exist in the world. No, I'd rather live in the Brave New World!

What is especially curious, and indeed quite sadistic, in the 1984 ideology is that in that context suffering is the only conceivable parameter of the party's influence, since, as O'Brien says, "if something is pleasurable, one might be doing it simply out of its own interest/will" (quoting by heart). In fact, *real* influence is rather measured by one's ability to determine what one considers pleasurable or at least desirable...

> There is meaning in Brave New World, the pursuit of happiness; but that's it, nothing else. And that's just not enough to build Jupiter Brains and engineer the universe, or even the galaxy.

On the contrary. Since any kind of change (let alone a posthuman one) implies some degree of unpredictability (of... "singularitarianism", in the "etymological" sense of being beyond the applicability of your "equations"), a Brave New World requires that change, conflicts, progress, etc. be frozen and disposed of.

--
Stefano Vaj

From stathisp at gmail.com Wed Jan 27 12:09:52 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Wed, 27 Jan 2010 23:09:52 +1100
Subject: [ExI] "Brave New World" is a nasty book
In-Reply-To: <59ED726F-BE29-4064-A4CD-347B8FC97057@bellsouth.net>
References: <4B5FDF68.40209@rawbw.com> <59ED726F-BE29-4064-A4CD-347B8FC97057@bellsouth.net>
Message-ID:

2010/1/27 John Clark :
> I think any intelligent being is going to have something like a happiness-sadness scale, and if we had complete control of it there would be a great, perhaps overwhelming, temptation to open our personal preferences control panel and push that happiness slide switch as far right as it will go. And rather than accomplish something important to get that agreeable feeling of pride and satisfaction, just crank up that feeling directly and don't bother accomplishing anything at all.
But if we had complete control of our brains we could arrange it so that happiness is coupled to some activity we consider intrinsically interesting (Pearce talks about being motivated by gradients of pleasure rather than pleasure/pain). We could also arrange it so that we are not tempted to pervert this mechanism.

--
Stathis Papaioannou

From pharos at gmail.com Wed Jan 27 12:14:33 2010
From: pharos at gmail.com (BillK)
Date: Wed, 27 Jan 2010 12:14:33 +0000
Subject: [ExI] Psi and gullibility
In-Reply-To: <4B5FD28B.50707@satx.rr.com>
References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com>
Message-ID:

On 1/27/10, Damien Broderick wrote:
> I think the key merit of John Clark's question is that it highlights why most people who pride ourselves on rationality despise the idea of psi, rather than remaining open-minded and exploratory about it: mad humans seem to make it a special feature of their delusions. They project their intentions upon the neutral activities of others, they are threatened or excited by "ideas of reference", they feel others putting scary thoughts into their heads, etc. I regard it as possible that psi actually is responsible for a quite small proportion of this, but mostly I assume it's a brain pathology that is often abolished by antipsychotic drugs (as Stathis tells us). But since real psi appears to operate at a low level for most of us, and gets mixed up with wishful thinking, pareidolia, imagination, etc, it's very easy to suppose that those who make strong claims for it are in the same camp as the crazies. (And some of them, admittedly, do seem to be. Then again, the same sort of accusation is made by all those reasonable people against transhumanists, singularitarians, cryonicists, CR dieters, etc etc.)

I think it is too simplistic to put psi claims down to only wishful thinking or madness. People are very complicated creatures, with multiple, always-changing motivations. Power, status, sex, con tricks, anything to earn a living; the motives are endless.

The main problem with claiming the reality of psi is the inability to produce any practical use for it. This was the main reason the CIA management canceled Star Gate. They weren't really interested in the two opposing reviews, one anti, saying it didn't exist, and one pro, claiming statistically significant results. The CIA couldn't use it because even the believers admitted it was hit and miss and they never knew what was a 'hit' until they obtained on-site verification.

Similarly, attempts were made to predict casino games. Again, they claimed statistical significance, but couldn't make money on it, even though there were strong financial incentives.

This is not a new thought, of course. Psi researchers have been struggling with this for many years. If only..........

I found an interesting paper that discusses this.

SPIRITUALITY AND THE CAPRICIOUS, EVASIVE NATURE OF PSI
J.E. Kennedy
For the National Conference on Yoga and Parapsychology, January, 2006, Visakhapatnam, India
Version of 5/14/2007

Abstract: Many writers have noted that psi appears to be capricious and actively evasive. The evidence includes the unintended and undesired (a) reversal of direction of psi effects between and within studies, (b) loss of intended effects while unintended internal effects occur, (c) declines in effects for subjects, experimenters, and lines of research, and (d) failure to develop successful applications of psi.
These characteristics are not consistent with the assumptions for statistical research and have not been explained.
---------------

The author comes to the conclusion that psi can only be used to enhance spirituality, because any attempt at practical use fails miserably.

Well, that's one way of looking at it. The alternative is that it doesn't exist.

BillK

From gts_2000 at yahoo.com Wed Jan 27 13:16:41 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 27 Jan 2010 05:16:41 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
Message-ID: <79823.23614.qm@web36504.mail.mud.yahoo.com>

--- On Tue, 1/26/10, Stathis Papaioannou wrote:

This exchange looks to me like a breakthrough, Stathis. I wrote:

>> To the argument that "association equals symbol grounding" as has been bandied about...
>>
>> Modern word processors can reference words in digital dictionaries. Let us say that I write a program that does only that, and that it does this automagically at ultra-fast speed on a powerful Cray software/hardware system with massive or even infinite memory. When the human operator types in a word, the s/h system first assigns that word to a variable, call it W, and then searches for the word in a complete dictionary of the English language. It assigns the dictionary definition of W to another variable, call it D, and then makes the association W = D.
>>
>> The system then treats every word in D as it did the original W, looking up the definition of every word in the definition of W. It then does the same for those definitions, and so on and so on through an indefinite number of branches until it nearly or completely exhausts the complete English dictionary.
>>
>> When the program finishes, the system will have made every possible meaningful association of W to other words. Will it then have conscious understanding of the meaning of W? No. The human operator will understand W, but s/h systems have no means of attaching meanings to symbols. The system followed purely syntactic rules to make all those hundreds of millions of associations without ever understanding them. It cannot get semantics from syntax.

You replied:

> But if you put a human in place of the computer doing the same thing, he won't understand the symbols either, no matter how intelligent he is.

Absolutely right!! My example above works like the Chinese room, but with the languages reversed. We can imagine a Chinese man operating the word-association program inside the Cray computer. He will manipulate the English symbols according to the syntactic rules specified by the program, and he will do so in ways that appear meaningful to an English-speaking human operator, but he will never come to understand the English symbols. He cannot get semantics from syntax.

This represents progress, because you argued just a few days ago that perhaps people and also computers really do get semantics from syntax. Now I think you agree that they do not. It looks like you agree with the third premise:

'A3: Syntax is neither constitutive of nor sufficient for semantics.'

We might also add this corollary:

'A3a: The mere syntactic association of symbols is not sufficient for semantics.'

These truths are pretty easy to see, and now you see them.

> The symbols need to be associated with some environmental input, and then they have "meaning".

Environmental input matters, no question about that. I'll address it in my next message.
For now I would like you to tell me if you agree with the above.

-gts

From gts_2000 at yahoo.com Wed Jan 27 14:32:36 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 27 Jan 2010 06:32:36 -0800 (PST)
Subject: [ExI] The Robot Reply to the CRA
In-Reply-To:
Message-ID: <515879.35649.qm@web36503.mail.mud.yahoo.com>

--- On Tue, 1/26/10, Stathis Papaioannou wrote:

> The symbols need to be associated with some environmental input, and then they have "meaning".

Your idea seems at first glance to make a lot of sense, so let's go ahead and add sensors to our digital computer so that it gets environmental inputs that correspond to the symbols. Let's see what happens:

http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php

-gts

From gts_2000 at yahoo.com Wed Jan 27 15:08:46 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 27 Jan 2010 07:08:46 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <570635.88979.qm@web36506.mail.mud.yahoo.com>

> I suspect that sooner or later you'll be asked to get back under your bridge lest any goats come to harm. Don't let it come to that, Gordon.

I joined this group about 10 years ago and I'm not about to leave. I realize these arguments are not popular in this corner of the internet, but unlike some people here I'm not rude or abusive. Until recently I did not question the core beliefs of extropianism (uploading, digital brain prostheses and so on). I agree with Stathis that these are very important issues, and I'm grateful that he at least has the courage to examine them while also showing courtesy to me as I play the devil's advocate.

-gts

From ddraig at gmail.com Wed Jan 27 08:33:24 2010
From: ddraig at gmail.com (ddraig)
Date: Wed, 27 Jan 2010 19:33:24 +1100
Subject: [ExI] goats and gullibility
In-Reply-To: <4B5F60FD.1040105@satx.rr.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com> <4B5F60FD.1040105@satx.rr.com>
Message-ID:

2010/1/27 Damien Broderick :
> If mainstream science were not so aggressively/defensively antagonistic to the phenomena and spent more effort trying to deal with them, instead of blindly ignoring them, psi-gifted people would have somewhere more rational to turn for support.

Oooh yeah, as someone who has been doing a lot of this stuff all his life, there's NO WAY AT ALL I'm going to enter into any sort of discussion on this sort of thing here. I have been subbed to the extropians list since 1993 or so, and I usually just use it as a url-mine, as generally the discussion is far too vicious for me to be bothered with. Empty vessels, loudest noise etc.

Funny, Damien, I thought you'd be a hard-core sceptic. This is good to see.

Dwayne
(was hiscdcj at lux.latrobe.edu.au, then ddraig at pobox.com)

--
ddraig at pobox.com irc.deoxy.org #chat
...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e...
http://www.barrelfullofmonkeys.org/Data/3-death.jpg
our aim is wakefulness, our enemy is dreamless sleep

From spike66 at att.net Wed Jan 27 16:27:40 2010
From: spike66 at att.net (spike)
Date: Wed, 27 Jan 2010 08:27:40 -0800
Subject: [ExI] 1984
In-Reply-To: <580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net> <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com> <0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net> <580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com>
Message-ID: <63635DB5A46545159CAC96BFC63615D6@spike>

> ...On Behalf Of Stefano Vaj
...
> What is especially curious, and indeed quite sadistic, in the 1984 ideology is that in that context suffering is the only conceivable parameter of the party's influence, since, as O'Brien says, "if something is pleasurable, one might be doing it simply out of its own interest/will"... Stefano Vaj

Ja. The goal would be a political party such that pleasure is the only conceivable parameter of the party's influence.

spike

From thespike at satx.rr.com Wed Jan 27 16:29:11 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Wed, 27 Jan 2010 10:29:11 -0600
Subject: [ExI] Psi and gullibility
In-Reply-To:
References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com>
Message-ID: <4B6069D7.1070402@satx.rr.com>

On 1/27/2010 6:14 AM, BillK wrote:

> I think it is too simplistic to put psi claims down to only wishful thinking or madness.

Of course, but I was suggesting that one motive for disregarding it, in current scientific quarters, is that it seems to resemble the delusions of the insane, or at best the sloppy correlation-equals-causation thinking of folk physics. Ironically, this is exactly the same erroneous reasoning process used in rejecting psi on that basis. (Think about it.)

> The main problem with claiming the reality of psi is the inability to produce any practical use for it. This was the main reason the CIA management canceled Star Gate.

Not true. It produced many practical outcomes unavailable (at a given time) from any other data source.

> They weren't really interested in the two opposing reviews, one anti, saying it didn't exist, and one pro, claiming statistically significant results. The CIA couldn't use it because even the believers admitted it was hit and miss and they never knew what was a 'hit' until they obtained on-site verification.

This is close to true (they weren't interested in process research, a cause of constant frustration to Ed May and his research team). Psi (so far) doesn't produce results as reliable as on-the-ground intel, although it did and still does supply important extra detail. But other factors were in play. Keep in mind that the US is the home of Dat Ole Time Religion, and that more than half the respondents in any poll of citizens declare their belief that the world was created by an anthropomorphic Father God some 6000 years ago. I've been told independently by several highly placed people in both the military and civilian wings of the psi program that it met extreme resistance, especially toward the end, from military decision makers who were convinced that the operatives were getting their results by demonic or diabolical means.
(I mean, what else could it be? These data were frequently good, yet no known means could explain their acquisition. Obviously Satan's work!) It's not only "crazy ideas" like psi that run afoul of this Xian absurdity: consider the refusal to fund needle exchange programs or legalize soft drugs, the insistence on "abstinence" programs in schools, etc. It's amazing that anything sensible ever gets funded in the USA. (Yes, the reply will be that support for STAR GATE and its predecessors was an example of that zaniness--and certainly it had support from some odd people like General Stubblebine. But a major motive for closing it down was the Xian fundamentalism rife in the military.)

> Similarly, attempts were made to predict casino games. Again, they claimed statistical significance, but couldn't make money on it, even though there were strong financial incentives.

Some attempted applications did make money. It's difficult to bootstrap something like this, because trained operatives are thin on the ground.

> I found an interesting paper that discusses this.
>
> SPIRITUALITY AND THE CAPRICIOUS, EVASIVE NATURE OF PSI
> J.E. Kennedy
> For the National Conference on Yoga and Parapsychology, January, 2006, Visakhapatnam, India
> Version of 5/14/2007
>
> Abstract: Many writers have noted that psi appears to be capricious and actively evasive. ... These characteristics are not consistent with the assumptions for statistical research and have not been explained.
> ---------------
> The alternative is that it doesn't exist.

You seem to have misunderstood the final sentence, quoted above. Kennedy is saying that psi is skittish, but that the empirical findings *remain inexplicable* using the null hypothesis.

Damien Broderick

From jonkc at bellsouth.net Wed Jan 27 17:30:57 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 27 Jan 2010 12:30:57 -0500
Subject: [ExI] 1984 and Brave New World
In-Reply-To: <580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com>
References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5E0176.4050507@satx.rr.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net> <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com> <0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net> <580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com>
Message-ID: <4095387A-3086-44E8-8FE9-090BB15A3937@bellsouth.net>

On Jan 27, 2010, at 5:55 AM, Stefano Vaj wrote:

> The alternative is probably to part altogether with the idea of "safety and happiness for the largest number" as exclusive societal goals and take one's chances with progress and change, isn't it?

Well sure, but how likely is it that we will choose the path of progress, especially when we can receive the pride of making great progress while sitting on our ass and without progressing one inch? If you think this is a debasement of the human spirit then all you need to do is change your mind, and I do mean CHANGE YOUR MIND. Now you think the idea is downright noble.

> What is especially curious, and indeed quite sadistic, in the 1984 ideology is that in that context suffering is the only conceivable parameter of the party's influence, since, as O'Brien says, "if something is pleasurable, one might be doing it simply out of its own interest/will" (quoting by heart).
In fact, *real* influence is rather > measured on one's ability to determine what one considers pleasurable > or at least desirable.

In Brave New World only happiness was important and nothing else, in 1984 only power was important and nothing else; if you accept that as an axiom and add the further one that power is the power over minds and nothing else, then what the inner party did in 1984 was quite logical.

> a Brave New World require that change, conflicts, progress, etc. be frozen and disposed of.

A Brave New World would be totally static, if you looked at it in 10,000 years things would be almost identical to what they are now. 1984 is not static, it is devolving; once newspeak became the primary language the wretched inhabitants could hardly even be called human.

Stathis Papaioannou wrote: > But if we had complete control of our brains we could arrange it so > that the happiness is coupled to some activity we consider > intrinsically interesting

But if you want to make progress you can't get pleasure just by glancing at that interesting thing, you must accomplish something significant in it; but doing significant things in interesting fields is hard and rare, and that means you won't be at maximum happiness very often. But who among us wouldn't want to be a little happier? No matter how happy we are we could always be a little happier, and that happiness slide switch is very easy to get to and would only take a slight movement of my finger to move it just a little way to the right, and then a little more, and then a little more, and then....

> We could also arrange it so that we are not tempted to pervert this mechanism.

Well sure we could, but would we? I really don't know the answer to that. John K Clark

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike66 at att.net Wed Jan 27 17:59:43 2010 From: spike66 at att.net (spike) Date: Wed, 27 Jan 2010 09:59:43 -0800 Subject: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility In-Reply-To: <4B5FD28B.50707@satx.rr.com> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> Message-ID: <602376332A22429C851265448732CF56@spike>

The psi discussion here has produced an insight: if a phenomenon exists but we haven't the memetic infrastructure to support it, we wouldn't recognize it even if we saw it. We would seek alternative explanations, as I do with psi. In my misspent youth I recall being taught that humans were the only tool users, but that the notion was being challenged. As a fallback, the textbook claimed that humans are definitely the only tool makers. Now we know plenty of nonhuman beasts do that too. Check this way cool dolphin behavior: http://www.youtube.com/watch?v=pQ50PYMXDCQ Note that the guy making the mud ring does not himself get fed, but the others do. So this requires some rethinking of our evolutionary memeset. spike

From thespike at satx.rr.com Wed Jan 27 18:08:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 27 Jan 2010 12:08:27 -0600 Subject: [ExI] goats and gullibility In-Reply-To: References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com> <4B5F60FD.1040105@satx.rr.com> Message-ID: <4B60811B.3000903@satx.rr.com>

On 1/27/2010 2:33 AM, Dwayne wrote: > Funny, damien, I thought you'd be a hard-core sceptic. This is good to see.

I *am* a hard core skeptic.
That's why I read the evidence and try to contact the people involved in extreme claims before shutting my mind down. It's interesting to see how the woo-woo believers respond to my book OUTSIDE THE GATES OF SCIENCE. For example, one guy blathers on Amazon: "...no denial of his ultra-materialistic Weltanschauung. This species of materialist is familiar - the sort whose reading list is coincident with the reccomendations of conservative rags: Dennet, Dawkins and other dawks. Amazing how sometimes the most outlandishly imaginative SF writers are dull as dishwater when it comes to the truly fantastic. DOuglas Adams was a case in point - his adulation of Dawkins and rabid anti-mysticism, as in 'last chance to see' stood in stark contrast to his Hitchhiker's Guide to the Galaxy. So too with this SF writer turned pop science author." Guess I must be doing something right... Damien Broderick From jonkc at bellsouth.net Wed Jan 27 18:20:30 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 27 Jan 2010 13:20:30 -0500 Subject: [ExI] Psi and gullibility. In-Reply-To: <4B6069D7.1070402@satx.rr.com> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> <4B6069D7.1070402@satx.rr.com> Message-ID: <49E1C0A0-F1F5-417A-AFB3-A43F7C9C4A7F@bellsouth.net> BillK wrote: > >> > I think it is too simplistic to put psi claims down to only wishful > thinking or madness. That would be a simple explanation for psi's popularity in the culture, but not I think a simplistic one. Damien Broderick wrote: > > It [psi] produced many practical outcomes unavailable (at a given time) by any other data source. Right, and I've got some swamp land I'd like to sell you. > Psi (so far) doesn't produce results as reliable as on the ground intel One of the great understatements of all time. > Keep in mind that the US is the home of Dat Ole Time Religion, and that more than half the respondents in any poll of citizens declare their belief that the world was created by an anthropomorphic Father God some 6000 years ago. I've been told independently by several highly placed people in both the military and civilians wings of the psi program that it met extreme resistance, especially toward the end, from military decision makers who were convinced that the operatives were getting their results by demonic or diabolical means. You are implying that the reason the Scientific Method can't confirm the existence of psi is the recent rise in republican bible thumpers. Never mind the fact that this lack of confirmation has been going on for centuries with zero progress, your excuse is MALE BOVINE FECAL MATERIAL! > Some attempted applications did make money. And some people have made money betting on the ponies, but I wouldn't invest in it. > It's difficult to bootstrap something like this, because trained operatives are thin on the ground. How many people who can read minds predict the future and see what's going on 12000 miles away do you need to make a successful company? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Wed Jan 27 18:37:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 27 Jan 2010 12:37:27 -0600 Subject: [ExI] Psi and gullibility. 
In-Reply-To: <49E1C0A0-F1F5-417A-AFB3-A43F7C9C4A7F@bellsouth.net> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> <4B6069D7.1070402@satx.rr.com> <49E1C0A0-F1F5-417A-AFB3-A43F7C9C4A7F@bellsouth.net> Message-ID: <4B6087E7.1040306@satx.rr.com> On 1/27/2010 12:20 PM, John Clark wrote: > You are implying that the reason the Scientific Method can't confirm the > existence of psi is the recent rise in republican bible thumpers. John, you really don't seem capable of following along from one sentence to the next when your buttons are being pushed. What I said is that one reason the Star Gate program was shut down was that some military and congressional decision makers are infected by lunatic fundamentalist memes. This is not a matter of "implying"--it's testimony from those who were there. The "Scientific Method" doesn't play a large role in such decisions. If this discussion were about why nuclear power isn't used more widely in the US, or evolution or safe sex taught properly in all schools, or fluoride added to drinking water, or loads of money spent on negligible-senescence research, you can bet your ass the John Clarks would be in there howling about the ruinous influence of "people of faith" on political decision-makers (and they'd be right). Damien Broderick From jonkc at bellsouth.net Wed Jan 27 18:40:38 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 27 Jan 2010 13:40:38 -0500 Subject: [ExI] The digital nature of brains. In-Reply-To: <79823.23614.qm@web36504.mail.mud.yahoo.com> References: <79823.23614.qm@web36504.mail.mud.yahoo.com> Message-ID: <73A630D2-BB9A-4F25-A0B3-54E399F56BCB@bellsouth.net> On Jan 27, 2010, at 8:16 AM, Gordon Swobe wrote: > > It looks like you [Stathis] agree with the third premise: > A3: Syntax is neither constitutive of nor sufficient for semantics.' If syntax is not only insufficient for meaning but it's not even part of meaning then what good is it? Why do you read this list, or books, or talk to people? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From artillo at comcast.net Wed Jan 27 19:13:14 2010 From: artillo at comcast.net (artillo at comcast.net) Date: Wed, 27 Jan 2010 19:13:14 +0000 (UTC) Subject: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility In-Reply-To: <602376332A22429C851265448732CF56@spike> Message-ID: <1257338256.228461264619594283.JavaMail.root@sz0062a.westchester.pa.mail.comcast.net> This post reminds me a bit of David Brin's Uplift Saga... great books, worth another read if I can find them in my collection! ----- Original Message ----- From: "spike" To: "ExI chat list" Sent: Wednesday, January 27, 2010 12:59:43 PM GMT -05:00 US/Canada Eastern Subject: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility The psi discussion here has produced an insight: if a phenomenon exists but we haven't the memetic infrastructure to support it, we wouldn't recognize it even if we saw it. We would seek alternative explanations, as I do with psi. In my misspent youth I recall being taught that humans were the only tool users, but that the notion was being challenged. As a fallback, the text book claimed that humans are definitely the only tool makers. Now we know plenty of nonhuman beasts do that too. Check this way cool dolphin behavior: http://www.youtube.com/watch?v=pQ50PYMXDCQ Note that the guy making the mud ring does not himself get fed, but the others do. 
So this requires some rethinking of our evolutionary memeset. spike

_______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL:

From stathisp at gmail.com Wed Jan 27 23:04:05 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 28 Jan 2010 10:04:05 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <79823.23614.qm@web36504.mail.mud.yahoo.com> References: <79823.23614.qm@web36504.mail.mud.yahoo.com> Message-ID:

On 28 January 2010 00:16, Gordon Swobe wrote:

>>> When the program finishes, the system will have made >>> every possible meaningful association of W to other words. >>> Will it then have conscious understanding of the meaning of W? >>> No. The human operator will understand W, but s/h systems >>> have no means of attaching meanings to symbols. The system >>> followed purely syntactic rules to make all those hundreds >>> of millions of associations without ever understanding them. >>> It cannot get semantics from syntax.
>
> You replied:
>
>> But if you put a human in place of the computer doing the >> same thing he won't understand the symbols either, no matter how >> intelligent he is.
>
> Absolutely right!!
>
> My example above works like the Chinese room, but with the languages reversed. We can imagine a Chinese man operating the word-association program inside the Cray computer. He will manipulate the English symbols according to the syntactic rules specified by the program, and he will do so in ways that appear meaningful to an English-speaking human operator, but he will never come to understand the English symbols. He cannot get semantics from syntax.
>
> This represents progress, because you argued just a few days ago that perhaps people and also computers really do get semantics from syntax. Now I think you agree that they do not.
>
> It looks like you agree with the third premise:
>
> A3: Syntax is neither constitutive of nor sufficient for semantics.
>
> We might also add this corollary:
>
> 'A3a: The mere syntactic association of symbols is not sufficient for semantics'
>
> These truths are pretty easy to see, and now you see them.

I'm afraid I don't agree. The man in the room doesn't understand the symbols, the matter in the computer doesn't understand the symbols, but the process of computing *does* understand the symbols. Look at it this way: you understand the symbols, but you can't see how the understanding comes from syntax. You think it's impossible. But it looks even more impossible that the understanding should come from matter. I could make the statement "matter is neither constitutive nor sufficient for semantics". You don't have any answer to that other than to point to a brain and say it has understanding. But I can point to a brain and say that it has understanding by virtue of the information processing it does. From your point of view a miracle has to occur in either case, but at least with the computational explanation we are in the same ballpark, as symbols, semantics and syntax are all to do with information, while matter and meaning are utterly different things. And if you still stubbornly insist that it's the matter that has the understanding, then you can say that it's the matter in the computer that is responsible.
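To see how little of this depends on meaning, here is a toy sketch of pure word association in Python (my own illustration, not the actual word-association program under discussion; the tiny corpus and the window rule are invented for the example):

from collections import defaultdict
from itertools import combinations

# A tiny invented corpus; any symbols at all would do, which is the point.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a dog",
]

WINDOW = 3  # invented rule: tokens within 3 positions count as "associated"

associations = defaultdict(set)
for sentence in corpus:
    tokens = sentence.split()
    for i, j in combinations(range(len(tokens)), 2):
        if j - i <= WINDOW:
            # A purely syntactic step: relate token shapes by position alone.
            associations[tokens[i]].add(tokens[j])
            associations[tokens[j]].add(tokens[i])

print(sorted(associations["cat"]))  # associations built with no grounding at all

Swap every English word for an arbitrary numbered token and the program behaves identically; that is what "purely syntactic" amounts to. The disagreement is only over whether some organization of such processes could constitute understanding.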
-- Stathis Papaioannou

From ablainey at aol.com Wed Jan 27 23:16:31 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 27 Jan 2010 18:16:31 -0500 Subject: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility In-Reply-To: <602376332A22429C851265448732CF56@spike> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com> <602376332A22429C851265448732CF56@spike> Message-ID: <8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com>

Have you seen the birds that have not only discovered they can drop nuts on the road and passing cars will crack them, but also that if they drop the nuts at a pedestrian crossing, they can wait for the cars to stop at the light so they can retrieve the nuts without risking getting run over! I can't remember whether they were Crows or Magpies. Either way, corvids have to be up there with the smartest of animals. Also in London, some pigeons use the underground to travel between nesting and feeding sites.

-----Original Message----- From: spike To: 'ExI chat list' Sent: Wed, 27 Jan 2010 17:59 Subject: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility

The psi discussion here has produced an insight: if a phenomenon exists but we haven't the memetic infrastructure to support it, we wouldn't recognize it even if we saw it. We would seek alternative explanations, as I do with psi. In my misspent youth I recall being taught that humans were the only tool users, but that the notion was being challenged. As a fallback, the textbook claimed that humans are definitely the only tool makers. Now we know plenty of nonhuman beasts do that too. Check this way cool dolphin behavior: http://www.youtube.com/watch?v=pQ50PYMXDCQ Note that the guy making the mud ring does not himself get fed, but the others do. So this requires some rethinking of our evolutionary memeset. spike

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike66 at att.net Thu Jan 28 01:34:54 2010 From: spike66 at att.net (spike) Date: Wed, 27 Jan 2010 17:34:54 -0800 Subject: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility In-Reply-To: <8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike> <8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com> Message-ID: <75AD3D26B9E24931B21F63C77D5043FE@spike>

...On Behalf Of ablainey at aol.com Subject: Re: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility >...Have you seen the birds that have not only discovered they can drop nuts on the road and passing cars will crack them...

I have witnessed this behavior exactly one time, on the road in Oregon from Roseburg to Coos Bay. I ran over the nuts and saw the birds come down after them. I was the tool used by the birds to crack hickory nuts. spike

From kanzure at gmail.com Thu Jan 28 03:08:51 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 27 Jan 2010 21:08:51 -0600 Subject: [ExI] H+ Magazine: DIYbio - Growing movement takes on aging Message-ID: <55ad6af71001271908s1e36c40dxc3538ddd6fe91f6f@mail.gmail.com>

Hey all, did anyone catch this one popping up in the news?
DIYbio: Growing movement takes on aging http://hplusmagazine.com/articles/bio/diy-bio-growing-movement-takes-aging Also showing up on: http://science.slashdot.org/story/10/01/26/165254/Open-Source-Software-Meets-Do-It-Yourself-Biology http://www.reddit.com/r/Cryptogon/comments/aufme/diy_bio_a_growing_movement_takes_on_aging_h/ http://news.ycombinator.com/item?id=1081033 http://futurismic.com/2010/01/27/garage-ribofunk-redux-diy-biohacking-gaining-popularity/ Here's some of the more interesting comments: http://science.slashdot.org/comments.pl?sid=1525022&cid=30907022 "Many of these biology experiments require very expensive machines, such as microarray machines, as mentioned by the article. I don't know if purchasing refurbished machines is a wise choice since we don't want data quality to be compromised. Also, don't forget about service plans when the machines break or producing inconsistent output. Not to mention various reagents, other chemicals, and supplies such as microarray chips that make the experiment yields high quality data. These easily reach hundreds of dollars a piece. Also, purchasing such chemicals will get you labeled as a terrorist. Another issue is gathering the samples. If you're collecting yeast, that would be simple. Arabidopsis, other small plants, mice, or other small animals, you probably need quite some space. Humans? That won't be simple at all. You have to clear privacy issues, getting the research review board to sign papers, etc. Sample collection alone can cost you lots of money and time. You can always resort to publicly available data. But chances are that you won't be able to impress scientists much for going that route. Also, most of the important discoveries are already done on this data. Most likely, all you can do is to confirm existing results or to provide some tangential additional info." Someone replied: http://science.slashdot.org/comments.pl?sid=1525022&cid=30907562 "Sure some of the more exotic equipment will, probably, still be out of the hands of DIYers. However, one of the things that this movement is known for is designing home-made versions of some of the expensive lab-grade equipment (such as 30k+ rpm centerfuges from Dremels; digital optical microscopes from an optical scope and a webcam; home built electron microscopes; etc.) which, actually, work. Pair that with their willingness to publish their, individual, projects as step-by-step instructions and share all their info as a community and I think it's completely possible that their communal capabilities will ramp up, relatively, quickly. A similar effect can be seen in the, long existing, amateur astronomy community and the DIY CNC community." http://science.slashdot.org/comments.pl?sid=1525022&cid=30907092 "Downloading computational biology software, that you have no idea how to use, makes you a molecular biologist, the same way that downloading finite element analysis software that you don't know how to use, makes you a mechanical engineer, downloading a SPICE simulator that you don't know how to use, makes you an electrical engineer, or downloading Pr0n that you can't re-enact makes you a sex expert. At least the Pr0n is easier to apply than a FEM or SPICE package, it being a "pictorial diagram", the disadvantage being that it requires a member of the appropriate sex (and species!) to re-enact." and someone replied: "Sure. And downloading an IDE you have no idea how to use doesn't make you a programmer, either. But it can certainly be a good first step in that direction. 
Knowing how to use those tools properly is part of what a (molecular biologist|mechanical engineer|electrical engineer) does, so if you're interested in doing that, you'll want to learn. The way to learn something complex is to see it, fumble around with it, make some mistakes, figure out what caused them, take a look at the documentation, mess up again, take another look, and so on. How will you ever start that process without first getting your hands on the tool?"

Red means break here http://science.slashdot.org/comments.pl?sid=1525022&cid=30908560 "I'm a mechanical engineer who uses finite element analysis every day. These days are numbered. Every year something new comes out that makes it even easier and more idiot-proof, heading towards the point where really anyone COULD do it. Red = "breaks here". "Would you like to use the Analysis Assistant?" The distinction between the expert and the automated amateur is diminishing. Remember when you needed to know HTML to have a web page? It's only now getting started with DIY biology, but just wait... the progress since last time might not be obvious, but it's happening."

There was a really awesome point-by-point breakdown of a naysayer's issues with diybio: http://science.slashdot.org/comments.pl?sid=1525022&cid=30909342 So, yeah, very interesting feedback from the community. Article goes something like this:

""" A movement is growing quietly, steadily, and with great speed. In basements, attics, garages, and living rooms, amateurs and professionals alike are moving steadily towards disparate though unified goals. They come home from work or school and transform into biologists: do-it-yourself biologists, to be exact. DIYbiology ("DIYbio") is a homegrown synthesis of software, hardware, and wetware. In the tradition of homebrew computing and in the spirit of the Make space (best typified by O'Reilly's Make Magazine), these DIYers hack much more than software and electronics. These biohackers build their own laboratory equipment, write their own code (computer and genetic) and design their own biological systems. They engineer tissue, purify proteins, extract nucleic acids and alter the genome itself. Whereas typical laboratory experiments can run from tens-of-thousands to millions of dollars, many DIYers' knowledge of these fields is so complete that the best among them design and conduct their own experiments at stunningly low costs. With adequate knowledge and ingenuity, DIYbiologists can build equipment and run experiments on a hobbyist's budget. As the movement evolves, cooperatives are also springing up where hobbyists are pooling resources and creating "hacker spaces" and clubs to further reduce costs, share knowledge and boost morale. This movement, still embryonic, could become a monster -- a proper rival to industry, government, and academic labs. The expertise needed to make serious breakthroughs on a regular basis at home hasn't yet reached a critical mass, but there are good reasons to believe that this day will soon come.

Software

DIYbio software has been around for a long time. Folding at home, which came out of Professor Vijay Pande's group at Stanford Chemistry Department in 2000, is designed to perform computationally intensive simulations of protein folding and other molecular dynamics. FAH, as it's known, is now considered the most powerful distributed computing cluster in the world. Open source software for bioinformatics, computational neuroscience, and computational biology is plentiful and continues to grow.
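For a concrete taste of this kind of software, here is a minimal sketch using Biopython (one of the libraries mentioned just below; it assumes the biopython package is installed, and the sequence is an arbitrary illustrative one, not data from any real experiment):

from Bio.Seq import Seq

# An arbitrary illustrative coding sequence, not from any real organism.
dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")

mrna = dna.transcribe()     # DNA -> mRNA: thymine becomes uracil
protein = dna.translate()   # read codons; '*' marks stop codons

print(mrna)      # AUGGCCAUUGUAAUGGGCCGCUGAAAGGGUGCCCGAUAG
print(protein)   # MAIVMGR*KGAR*

A few lines like these replace what was once a manual lookup against a codon table, which is exactly the sort of grunt work this software absorbs.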
On their own time, students, professors, entrepreneurs, and curious amateurs contribute to open source work that captures their interests. BioPerl and BioPython have hundreds of contributors and tens of thousands of users. Programs like GENESIS and NEURON have been downloaded by computational neuroscientists for over twenty years. The software part is easy. The FOSS/OSS machine is well established, and has been successful for a long time. As the shift to open source software continues, computational biology will become even more accessible, and even more powerful. (Red Hat has recently asked the US Supreme Court to bar all software patents, submitting an amicus brief to the Supreme Court in the "Bilski case." See Resources.)

Hardware

Biological research is expensive. Microscopes, pipetmen, PCR machines, polyacrylamide gels, synthesizers -- basics for any molecular biology lab -- run from hundreds to thousands of dollars apiece. Traditional experiments cost hundreds-of-thousands to millions of dollars to conduct. How can the hobbyist afford this equipment? Unless "Joe (or Jill) the DIYBiologist" is extremely wealthy, they can't. So instead of purchasing brand new equipment, DIYers like to find good deals at auction sites like eBay or Dovebid, refurbish discarded equipment from labs or biotech companies, or -- more and more frequently -- build it themselves. Hardware hacking has a rich history, filled with geek heroes, and these skills are being turned towards the creation of biotech equipment. On the bleeding edge of it all, some DIYbiologists are applying their skills to h+ technologies. SENS researchers John Schloendorn, Tim Webb, and Kent Kemmish are conducting life-extension research for the SENS Foundation, building equipment for longevity research, saving thousands of dollars doing it themselves. Stem cell extraction and manipulation, DIY prosthetics, DIY neural prosthetics, sensory enhancements, immune system testing, general tweaking of whatever system strikes the hobbyist's fancy.

The DIY SENS lab is headed by PhD candidate John Schloendorn. John is a last-year PhD student at Arizona State University. He volunteers full time for the SENS Foundation. Entering his lab was a mind-blowing experience. The ceilings were high, the lab itself was spacious and well-lit. It smelled of sawdust, the product of constructing the furniture on site. The equipment was handmade, but brilliantly so. Elegance and function were clear priorities. When a panel could be replaced with a tinted membrane, it was. When metal could be replaced by sanded wood, it was. The on-site laser was modified from a tattoo-removal system. Costs were down, but the technical skill involved in manufacturing was top notch. In addition to his own experiments, Schloendorn is building an incubator (no pun intended) for DIYbio engineers who work on fighting death. Schloendorn tells me that working by ourselves might only take us so far, but thinks it's a great place to start (many successful discoveries and businesses were founded in someone's garage). He believes that being a DIYer doesn't mean you must "go it alone," but can include cooperation and teamwork. He cautions that since time and effort are limited, DIYers must choose carefully what they're going to work on and do that which is most important for them. His personal priority is to solve parts of the aging question, and he'd obviously like many other DIYers to take up this challenge. "I wanted to make a dent in the suffering and death caused by aging.
It seemed like the SENS people were the smartest, most resourceful and best organized among those ambitious enough. Of course, there are also DIYers with no ambitions to save the world, who are content to 'make yogurt glow' in the basement for their own personal satisfaction."

The DIYbio community has a high-traffic mailing list, where projects are discussed, designs shared, and questions asked or answered. The community has worked on dozens of DIY designs: gel electrophoresis techniques, PCR machines, alternative dyes and gels, light microscopes, and DNA extraction techniques. All of them focus on enabling cheap and effective science.

Wetware

The most popular conception of wetware is the genome -- the language of life, the ultimate hackable code. Genetic engineering and (more recently) synthetic biology are the hallmarks of this effort. Synthetic biology takes genetic engineering and builds it into a scalable engineering framework. It is the synthesis of complex, biologically-based (or inspired) systems that display functions that do not exist in nature. In synthetic biology, genetic code is abstracted into chunks, colloquially known as biological "parts." These parts allow us to build increasingly complex systems: putting several parts together creates a "device" that is regulated by start codons, stop codons, restriction sites, and similar coding regions known as "features." (Visit MIT's Standard Registry of Biological Parts for more detailed information, and tutorials on how to make your own biological part. A toy code sketch of this parts-and-devices abstraction appears a little further below.) These parts are primarily designed by undergraduates competing in the International Genetically Engineered Machine (iGEM) competition, the largest student synthetic biology symposium. At the beginning of the summer, student teams are given a kit of biological parts from the Registry of Standard Biological Parts. Working at their own schools over the summer, they use these parts, and new parts of their own design, to build biological systems and operate them in living cells. Randy Rettberg, director of the iGEM competition, says that iGEM is addressing the question: "Can simple biological systems be built from standard, interchangeable parts and operated in living cells? Or is biology just too complicated to be engineered in this way?" The broader goals of iGEM include enabling the systematic engineering of biology, promoting the open and transparent development of tools for engineering biology, and helping to construct a society that can productively apply biological technology. If this sounds suspiciously like a front for DIYbio, that's probably because it is. In addition to attracting the brightest young minds to the critical field of molecular biology, many of the founders of iGEM, including Drew Endy at Stanford, Tom Knight at MIT, and DIYbio-rep Mac Cowell are heavily involved in or supportive of the DIYbio community. The recent introduction of iGEM teams unaffiliated with universities ("DIYgem") is a step towards an inclusive community, allowing anyone with the brain and the drive to participate at the level of academics.

So many seeking, Around lampposts of today, Change is on the wind. -- Unknown

Mainstream science is increasingly friendly to DIYbio. DIYbiologist Jason Bobe works on George Church's Personal Genome Project (PGP), which shares and supports DIYbio's drive to make human genome data available for anyone to use.

How to get involved

Join the DIYbio mailing list (see Resources). Anyone can join and it's the best way to begin your involvement with DIYbio.
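As promised above, here is a toy sketch of the parts-and-devices abstraction in code. The part names and sequences are invented placeholders, not real registry entries, and real assembly must also respect restriction sites and wet-lab constraints; the point is only the engineering idea of composing standardized parts:

from dataclasses import dataclass

@dataclass
class Part:
    name: str      # registry-style identifier (invented here)
    role: str      # promoter, rbs, cds, terminator, ...
    sequence: str  # placeholder DNA, not a real registry entry

def assemble(parts):
    """A 'device' is just an ordered composition of standardized parts."""
    return "".join(p.sequence for p in parts)

promoter   = Part("BBa_example_promoter",   "promoter",   "TTGACATATAAT")
rbs        = Part("BBa_example_rbs",        "rbs",        "AGGAGG")
reporter   = Part("BBa_example_cds",        "cds",        "ATG" + "GCC" * 8 + "TAA")
terminator = Part("BBa_example_terminator", "terminator", "TTTTTTTT")

device = assemble([promoter, rbs, reporter, terminator])
print(len(device), "bp:", device)

The design choice the registry makes is the same one any package ecosystem makes: agree on interfaces so that parts written by strangers compose.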
You'll want to check out their DIYbio forums, which are growing rapidly. You can also find a local group there and connect with like-minded DIYers. Have a look around the DIYbio.org site, which lists some of the current projects:

BioWeatherMaps: "Self-Assembly Required" Flash mobs meet consumer-generated science in the new DIYbio initiative Flashlabs, where they'll be pulling off a new large-scale collaborative science project annually for amateurs and enthusiasts worldwide. Case in point -- the BioWeatherMap initiative is a "global, grassroots, distributed environmental sensing effort aimed at answering some very basic questions about the geographic and temporal distribution patterns of microbial life."

SKDB: "Apt-Get for Real Stuff (Hardware)" Skdb is a free and open source hardware package management system. The idea is to let the user "make" a project by using all of the packaged hardware out on the web, so that the wheel isn't reinvented every time a new project is started. The package includes milling machines, gel boxes, semiconductor manufacturing processes, fabratories, robot armies, wetlab protocols... everything. At the moment, they're working on OpenCASCADE integration. Package maintainers from the DIYbio and open manufacturing communities assist others in bringing projects into the system.

Smartlab: "Taking the Work out of Benchwork" Project Smartlab is aiming to build hardware to augment the benchtop science experience. This includes automatic data logging instruments with painless electronic lab book integration, video streaming with "instant replay" features for those "did-I-just-pipette-that-into-the-wrong-tube" moments, and interactive protocol libraries that guide new scientists and the scientifically enthusiastic alike through tricky protocols.

The Pearl Gel Box: "A Built-In Transilluminator and Casting Box for $199!" Want to get a jump start in DIYbio? The gel electrophoresis box is a basic tool for any DIYbiologist -- and they're making kits so you can build your own. The Pearl Gel Box is cutting edge, open-source, and cheap. The participants in this project have created a professional grade gel box, available fully assembled or as free design documents. Plus, they want you to design new features like a built-in light filter or a mount for your digital cam.

This is a mere glimpse into the vast undertaking that is DIYbio. Most DIYers work independently on projects that have significant personal meaning. Tyson Anderson, a specialist in the US Army, was struck by the lack of technological infrastructure during his time in Afghanistan. Anderson, a transhumanist as well as a DIYbiologist, was trying to discuss the implications of the Singularity with the friends he had made there. He realized it was difficult to conceive of a technological paradise in a world with limited electricity. He looked to DIYbio to make a difference, and is now engineering bioluminescent yeast to construct sugar-powered lamps for his friends in Afghanistan. Because there is much overlap between the DIYbio and transhumanist communities, it's not surprising that many emerging projects focus on both. DIY-SENS is only the tip of the iceberg. DIYh+ is a fusion of DIYbio and h+, coordinating projects that allow willing individuals to experiment with practical human enhancement. Example projects include supplement/exercise regimens, DIY-tDCS, DIY-EEG, and the personal harvesting of stem cells.
From the group description: "This group is a friendly cross between DIYbio and Open Source Medicine, with a dash of the ImmInst (Immortality Institute) forums [see Resources]. It's the slightly edgier half of OSM. The community, ideally, should strive to foster an open and safe way for responsible adults to learn about do-it-yourself human enhancement. We do not believe in limiting the use of medical technology to therapy."

It's not just enhancement technology that can benefit from DIYbiology. As the popular distrust of doctors grows, people will want to understand and monitor their own body. Likewise, as personalized medicine becomes a reality, we will probably see a rise in the number of hobbyists who treat their own bodies as machines to be worked on -- like a radio or a car -- branching out from personalized genomics to things like DIY stem cell extraction and manipulation, DIY prosthetics, DIY neural prosthetics and sensory enhancements (infrared vision, anyone?), immune system testing, and general tweaking of whatever system strikes the hobbyist's fancy. This hacker's paradise has not yet come to pass, but it is, perhaps, our exciting future.

The road to true DIYbiology will not be easy. It's not a magic bullet. It will probably not produce the next Bill Gates, at least not for a long time. Biology is hard, messy, and failure is more common than success. The knowledge required takes time and effort to acquire, and even then, so-called textbook knowledge is being revised almost daily. Many are attracted by the glamour of it all. They're drawn to the romance of being a wetware hacker -- the existential thrill of tweaking life itself. They tend to become quickly disappointed by the slow, tedious, difficult path they face. Hobbyist biology is still in its infancy, and it will take a great deal of work before it reaches its potential. Few are more skeptical than DIYbiologists themselves. But many see no choice. Squabbles over sponsorship, intellectual property, and cumbersome regulations often prevent progress along more conventional lines. An anonymous DIYbiologist puts it this way: "universities charge far more than the experiments really cost, and bureaucratic rules constantly retard real progress." Questions of IP and ownership can hamstring innovation in industry, while concerns for national security prevent real information sharing in government science. Large, unwieldy bureaucracies and regulatory agencies find it difficult to keep pace with the breakneck speed of technological progress. Thought-monopolies make it unwise to promote new ideas while waiting for tenure, despite the fact that many central dogmas of biology change. Individuals willing to intelligently circumvent convention may find themselves stumbling into uncharted areas of biology where they may make new discoveries.

Indeed, it is only in the last century that biology has become an unreachable part of the academic-corporate-government machine. History's naturalists, from Darwin to Mendel, are the true fathers of DIYbiology. They shared the spirit of discovery and scientific ingenuity and the drive to "figure it out yourself." No one told Isaac Newton to discover the laws of classical mechanics, and you can bet he was never given calculus homework. Einstein's life would have been respectable if he hadn't spent a silent decade questioning the nature of spacetime.
They were driven by the simple need to know, and they would not be stopped by the incidental truth that no one had figured it out before. DIYbiology is perhaps a reemergence of this basic curiosity, applied to the study of life. As technology advances, let us study the workings of the cell the same way Newton may have studied the effects of gravity. Who wouldn't want to know? Who can resist a peek at the mechanisms of our own existence? DIYbio may be young, but it is a symptom of our species' unbreakable curiosity. We will know these secrets too, someday.

"For me, chemistry represented an indefinite cloud of future potentialities which enveloped my life to come in black volutes torn by fiery flashes, like those which had hidden Mount Sinai. Like Moses, from that cloud I expected my law, the principle of order in me, around me, and in the world. I would watch the buds swell in spring, the mica glint in the granite, my own hands, and I would say to myself: I will understand this, too, I will understand everything." -- Primo Levi

Without a lab supervisor to guide them, DIYbiologists must take a carefully disciplined (and perhaps more genuine) approach to science. DIYbio has the potential to revive a noble tradition of pure scientific curiosity, with a modern, engineering twist. If you want to get something done, some day it really will be possible to do it yourself.

Parijata Mackey is the Chief Science Officer of Humanity+ and a senior at the University of Chicago, interested in applying synthetic biology, stem cell therapies, computational neuroscience, and DIYbio to life-extension and increased healthspan. """

- Bryan http://heybryan.org/ 1 512 203 0507

From ablainey at aol.com Thu Jan 28 03:40:39 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 27 Jan 2010 22:40:39 -0500 Subject: [ExI] The Robot Reply to the CRA In-Reply-To: <515879.35649.qm@web36503.mail.mud.yahoo.com> Message-ID: <8CC6DE2CB14807F-8510-6533@webmail-m044.sysops.aol.com>

-------------------------Gordon Swobe 27 Jan 2010 14:32 >Your idea seems at first glance to make a lot of sense, so let's go ahead and >add sensors to our digital computer so that it gets environmental inputs that >correspond to the symbols. Let's see what happens: >http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php >-gts

But how is this any different from the human brain? It can also be viewed as multiple rooms, each containing an instance of Searle. Each Searle is processing information in isolation. He never really understands, yet the combined result is a system we call 'intelligent'. I'm fairly sure that the average frontal lobe could not pass the Turing test by itself. In the same way, the Chinese room in the robot should actually be multiple Chinese rooms. It is the combined power of these rooms that determines whether the 'System' should pass the test, not examination of isolated fragments and their inner workings. Also, the discrete synaptic signals our brain uses to communicate with all its parts and to I/O with the world are no different to the translation from a visual image, sound or any other sense into computer binary for the robot. I do not understand 'synaptic' firings; does that mean I am not intelligent?
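A toy illustration of the many-rooms point (my own sketch: the three "rooms" below use standard textbook threshold-unit weights for exclusive-or, and nothing here pretends to model a brain). Each room applies one dumb local rule, and no room contains the concept XOR, yet the wired-up system computes it:

def room(weights, bias):
    # One "room": a single dumb rule -- weigh the inputs, fire past a threshold.
    return lambda inputs: int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

room_a = room([1, 1], -0.5)    # fires if either input fires (OR-like)
room_b = room([-1, -1], 1.5)   # fires unless both inputs fire (NAND-like)
room_c = room([1, 1], -1.5)    # fires only if both its inputs fire (AND-like)

def system(x, y):
    # The competence lives in the wiring between rooms, not in any one room.
    return room_c([room_a([x, y]), room_b([x, y])])

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", system(x, y))  # prints the XOR truth table

Scale three rooms up to billions and the question being pressed here remains: if the competence lives in the wiring rather than in any part, why demand that the understanding live in a part?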
From spike66 at att.net Thu Jan 28 06:00:47 2010 From: spike66 at att.net (spike) Date: Wed, 27 Jan 2010 22:00:47 -0800 Subject: [ExI] funny headlines again In-Reply-To: <75AD3D26B9E24931B21F63C77D5043FE@spike> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com> <75AD3D26B9E24931B21F63C77D5043FE@spike> Message-ID: <4100958D1881411A8A5932681CE1980A@spike>

Remember the days when the mainstream news stories were edited to perfection, at least grammatically if not memetically? This wasn't a headline, but it is an example of slipping standards in proofreading: "...Other prominent climate opinion makers faired poorly..." {8^D It was from this story: http://www.cnn.com/2010/WORLD/americas/01/27/climate.report.america.trust/index.html?hpt=T2 Damien, is it getting harder to find good proofreaders? spike

From stathisp at gmail.com Thu Jan 28 12:27:39 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 28 Jan 2010 23:27:39 +1100 Subject: [ExI] The Robot Reply to the CRA In-Reply-To: <515879.35649.qm@web36503.mail.mud.yahoo.com> References: <515879.35649.qm@web36503.mail.mud.yahoo.com> Message-ID:

On 28 January 2010 01:32, Gordon Swobe wrote: > --- On Tue, 1/26/10, Stathis Papaioannou wrote: > >> The symbols need to be associated with some environmental input, >> and then they have "meaning". > > Your idea seems at first glance to make a lot of sense, so let's go ahead and add sensors to our digital computer so that it gets environmental inputs that correspond to the symbols. Let's see what happens: > > http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php

Firstly, I doubt that a computer without real world input could pass the TT, any more than a human who suffers complete sensory deprivation from birth could pass it. I think that both the human and the computer might be conscious, dreaming away in a virtual reality world, but it would be a fantastic coincidence if the dreams corresponded to the real world objects that the rest of us observe, which is what would be required to pass the TT. It would be different if the human or computer were programmed with real world data, but the data then represents sensory input stored in memory.

Secondly, that article takes the CRA as primary, and not the assertion that syntax does not give rise to semantics, which you say the CRA is supposed to illustrate. If the original or robot CRA show what they claim to show, then they also show that the brain cannot have understanding, for surely the individual brain components have if anything even less understanding of what they are doing than the man in the room does. This is the systems response to the CRA. Searle's reply to this is "put the room in the man's head". This reply is evidence of a basic misunderstanding of what a system is. It seems that Searle accepts that individual neurons lack understanding and agrees that the ensemble of neurons working together has understanding. He then suggests putting the room in the man's head to show that in that case the man is the whole system, and the man still lacks understanding. But if the ensemble of neurons working together has understanding it does *not* mean that the neurons themselves have understanding! This is a subtle point and perhaps has not come across well when I have tried to explain it before.
The best way to look at it is to modify the CRA so that instead of one man there are many men working together, maybe even one man for each neuron. Presumably you would say that this extended CR also lacks understanding, since all of the men lack understanding, either singly or collectively, if they got into a meeting to discuss their jobs. But how, then, does this differ from the situation of the brain? -- Stathis Papaioannou From ismirth at gmail.com Thu Jan 28 13:52:58 2010 From: ismirth at gmail.com (Isabelle Hakala) Date: Thu, 28 Jan 2010 08:52:58 -0500 Subject: [ExI] dolphins making tools and developing a technology: RE: Psi and gullibility In-Reply-To: <8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> <602376332A22429C851265448732CF56@spike> <8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com> Message-ID: <398dca511001280552k2173fc4dp3c692a254dfa7e7a@mail.gmail.com> Here is the youtube for the bird/nut/walkway thing: http://www.youtube.com/watch?v=PKvPPi0F_JY ~~~~~~~~~~~~~~~~~~~~~~~ Isabelle Hakala "Any person who says 'it can't be done' shouldn't be interrupting the people getting it done." "Do every single thing in life with love in your heart." 2010/1/27 > Have you seen the birds that have not only discovered they can drop nuts > on the road and > passing cars will crack them. but also that if they drop the nuts at a > pedestrian crossing. > They can wait for the cars to stop at the light so they can retrieve tha > nuts without > risking getting run over! > > I can't remember whether they were Crows or Magpies. Either way corvids > have to be up > there with the smartest of animals. > > Also in London, some pigeons use the underground to travel between nesting > and feeding sites. > > > > > -----Original Message----- > From: spike > To: 'ExI chat list' > Sent: Wed, 27 Jan 2010 17:59 > Subject: [ExI] dolphins making tools and developing a technology: RE: Psi > and gullibility > > > > > The psi discussion here has produced an insight: if a phenomenon exists but > > we haven't the memetic infrastructure to support it, we wouldn't recognize > > it even if we saw it. We would seek alternative explanations, as I do with > > psi. > > > In my misspent youth I recall being taught that humans were the only tool > > users, but that the notion was being challenged. As a fallback, the text > > book claimed that humans are definitely the only tool makers. Now we know > > plenty of nonhuman beasts do that too. > > > Check this way cool dolphin behavior: > > http://www.youtube.com/watch?v=pQ50PYMXDCQ > > > Note that the guy making the mud ring does not himself get fed, but the > > others do. So this requires some rethinking of our evolutionary memeset. > > > spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ablainey at aol.com Thu Jan 28 16:47:25 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 28 Jan 2010 11:47:25 -0500 Subject: [ExI] The Robot Reply to the CRA In-Reply-To: References: <515879.35649.qm@web36503.mail.mud.yahoo.com> Message-ID: <8CC6E50B33BD276-57B0-446@webmail-d012.sysops.aol.com>

Echooooooo, echo, echo

-----Original Message----- From: Stathis Papaioannou To: gordon.swobe at yahoo.com; ExI chat list Sent: Thu, 28 Jan 2010 12:27 Subject: Re: [ExI] The Robot Reply to the CRA [snip]

_______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ablainey at aol.com Thu Jan 28 16:58:39 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 28 Jan 2010 11:58:39 -0500 Subject: [ExI] 1984 and Brave New World In-Reply-To: <4095387A-3086-44E8-8FE9-090BB15A3937@bellsouth.net> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com><4B5E0176.4050507@satx.rr.com><4B5E0BE1.8030403@satx.rr.com><3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net><580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com><3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net><580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com><0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net><580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com> <4095387A-3086-44E8-8FE9-090BB15A3937@bellsouth.net> Message-ID: <8CC6E5245D4BB39-57B0-829@webmail-d012.sysops.aol.com> I havn't read 'A brave new world', but if I understand what you have said about the comparison. Doesn't it come down to the age old dichotomy of carrot vs stick? The political currencies for the two societies being pleasure and fear; the strongest of human emotions. Each book being a warning of the logical outcome when the ballence is lost one way or the other. -----Original Message----- From: John Clark To: ExI chat list Sent: Wed, 27 Jan 2010 17:30 Subject: [ExI] 1984 and Brave New World On Jan 27, 2010, at 5:55 AM, Stefano Vaj wrote: The alternative is probably to part altogether with the idea of "safety and happiness for the largest number" as exclusive societal goals and takes one's risk with progress and change, isn't it? Well sure, but how likely is it that we will choose the path of progress especially when we can receive the pride of making great progress while sitting on our ass and without progressing one inch. If you think this is a debasement of the human spirit then all you need to do is change your mind, and I do mean CHANGE YOUR MIND. Now you think the idea is downright noble. What is especially curious, and indeed quite sadistic, in the 1984 ideology is that in that context suffering is the only conceivable parameter of the party's influence, since, as O'Brien says "if something is pleasurable, one might be doing it simply out of its own interest/will" (quoting by heart). In fact, *real* influence is rather measured on one's ability to determine what one considers pleasurable or at least desirable. In Brave New World only happiness was important and nothing else, in 1984 only power was important ant nothing else; if you accept that as an axiom and add the further one that power is the power over minds and nothing else then what the inner party did in 1984 was quite logical. a Brave New World require that change, conflicts, progress, etc. be frozen and disposed of. A Brave New World would be totally static, if you looked at it in 10,000 years things would be almost identical to what they are now. 1984 is not static, it is devolving; once newspeak became the primary language the wretched inhabitants could hardly even be called human. Stathis Papaioannou wrote: But if we had complete control of our brains we could arrange it so that the happiness is coupled to some activity we consider intrinsically interesting But if you want to make progress you can't get pleasure just by glancing at that interesting thing, you must accomplish something significant in it; but doing significant things in interesting fields is hard and rare, and that means you won't be at maximum happiness very often. But who among us wouldn't want to be a little happier? 
No matter how happy we are we could always be a little happier, and that happiness slide switch is very easy to get to and would only take a slight movement of my finger to move it just a little way to the right, and then a little more, and then a little more, and then.... We could also arrange it so that we are not tempted to pervert this mechanism. Well sure we could, but would we? I really don't know the answer to that. John K Clark = _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Thu Jan 28 17:00:19 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 28 Jan 2010 09:00:19 -0800 (PST) Subject: [ExI] The Robot Reply to the CRA Message-ID: <712647.45893.qm@web36508.mail.mud.yahoo.com> Before I answer this message, Stathis... I picked up a copy of _Rediscovery of Mind_. Chalmers quoted Searle out of context, just as I suspected. Searle considers three different versions of the silicon brain thought experiment and does not endorse of any of them. In the first version the chip-brain works fine (the extropian pipe dream), in the second the relationship between mind and behavior is broken (Chalmers quotes a part of this) and in the third version the patient retains a mental life but becomes paralyzed. Just three sentences before the Chalmers quote, Searle adamantly rejects the notion that chips can duplicate the causal powers of neurons. He explains the different versions of the thought experiment only by way of bringing attention to some important ideas in the philosophy of mind. More later. -gts --- On Thu, 1/28/10, Stathis Papaioannou wrote: > From: Stathis Papaioannou > Subject: Re: [ExI] The Robot Reply to the CRA > To: gordon.swobe at yahoo.com, "ExI chat list" > Date: Thursday, January 28, 2010, 7:27 AM > On 28 January 2010 01:32, Gordon > Swobe > wrote: > > --- On Tue, 1/26/10, Stathis Papaioannou > wrote: > > > >> The symbols need to be associated with some > environmental input, > >> and then they have "meaning". > > > > Your idea seems at first glance to make a lot of > sense, so let's go ahead and add sensors to our digital > computer so that it gets environmental inputs that > correspond to the symbols. Let's see what happens: > > > > http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php > > Firstly, I doubt that a computer without real world input > could pass > the TT, any more than a human who is suffers complete > sensory > deprivation from birth could pass it. I think that both the > human and > the computer might be conscious, dreaming away in a virtual > reality > world, but it would be a fantastic coincidence if the > dreams > corresponded to the real world objects that the rest of us > observe, > which is what would be required to pass the TT. It would be > different > if the human or computer were programmed with real world > data, but the > data then represents sensory input stored in memory. > > Secondly, that article takes the CRA as primary, and not > the assertion > that syntax does not give rise to semantics, which you say > the CRA is > supposed to illustrate. 
> If the original or robot CRA show what they claim to show, then they also show that the brain cannot have understanding, for surely the individual brain components have if anything even less understanding of what they are doing than the man in the room does. This is the systems response to the CRA. Searle's reply to this is "put the room in the man's head". This reply is evidence of a basic misunderstanding of what a system is. It seems that Searle accepts that individual neurons lack understanding and agrees that the ensemble of neurons working together has understanding. He then suggests putting the room in the man's head to show that in that case the man is the whole system, and the man still lacks understanding. But if the ensemble of neurons working together has understanding it does *not* mean that the individual neurons have understanding! This is a subtle point and perhaps has not come across well when I have tried to explain it before. The best way to look at it is to modify the CRA so that instead of one man there are many men working together, maybe even one man for each neuron. Presumably you would say that this extended CR also lacks understanding, since all of the men lack understanding, either singly or collectively, if they got into a meeting to discuss their jobs. But how, then, does this differ from the situation of the brain? > -- Stathis Papaioannou From spike66 at att.net Thu Jan 28 17:30:46 2010 From: spike66 at att.net (spike) Date: Thu, 28 Jan 2010 09:30:46 -0800 Subject: [ExI] funny headlines again In-Reply-To: <4100958D1881411A8A5932681CE1980A@spike> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com><75AD3D26B9E24931B21F63C77D5043FE@spike> <4100958D1881411A8A5932681CE1980A@spike> Message-ID: <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> > Subject: [ExI] funny headlines again >> > Remember the days when the mainstream news stories were edited to perfection... {8^D spike Here's one from this morning. It kinda has a double meaning if one doesn't mind the overt nouning of an adjective. Since converts in Egypt need to hide, the converts are also coverts: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook.jpg Type: image/jpeg Size: 28630 bytes Desc: not available URL: From jonkc at bellsouth.net Thu Jan 28 18:05:58 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 28 Jan 2010 13:05:58 -0500 Subject: [ExI] Psi and gullibility. In-Reply-To: <4B6087E7.1040306@satx.rr.com> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> <4B6069D7.1070402@satx.rr.com> <49E1C0A0-F1F5-417A-AFB3-A43F7C9C4A7F@bellsouth.net> <4B6087E7.1040306@satx.rr.com> Message-ID: <15E1D8BE-19B0-4D2E-A73A-391AE1AB748F@bellsouth.net> On Jan 27, 2010, at 1:37 PM, Damien Broderick wrote: > John, you really don't seem capable of following along from one sentence to the next when your buttons are being pushed. What I said is that one reason the Star Gate program was shut down was that some military and congressional decision makers are infected by lunatic fundamentalist memes.
I know you were talking explicitly about one program but I stand by what I said: "You are implying that the reason the Scientific Method can't confirm the existence of psi is the recent rise in republican bible thumpers". And if the bible thumpers agree with me that taxpayer money shouldn't be spent on the Star Gate program, well, even a stopped clock is right twice a day. So I give the religious loonies their due: if you do the right thing for the wrong reason it's still the right thing. John K Clark From ablainey at aol.com Thu Jan 28 18:11:03 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 28 Jan 2010 13:11:03 -0500 Subject: [ExI] funny headlines again In-Reply-To: <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com><75AD3D26B9E24931B21F63C77D5043FE@spike><4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> Message-ID: <8CC6E5C63666AAB-731C-1651@webmail-m021.sysops.aol.com> A very apt headline considering the subject. Good find! -----Original Message----- From: spike To: 'ExI chat list' Sent: Thu, 28 Jan 2010 17:30 Subject: Re: [ExI] funny headlines again > Subject: [ExI] funny headlines again >> > Remember the days when the mainstream news stories were edited to perfection... {8^D spike Here's one from this morning. It kinda has a double meaning if one doesn't mind the overt nouning of an adjective. Since converts in Egypt need to hide, the converts are also coverts: From jonkc at bellsouth.net Thu Jan 28 18:32:12 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 28 Jan 2010 13:32:12 -0500 Subject: [ExI] Skin to brains? In-Reply-To: <4B6087E7.1040306@satx.rr.com> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> <4B6069D7.1070402@satx.rr.com> <49E1C0A0-F1F5-417A-AFB3-A43F7C9C4A7F@bellsouth.net> <4B6087E7.1040306@satx.rr.com> Message-ID: <0A846A86-D8A8-4636-BF3C-00B6F74A3E98@bellsouth.net> In the current issue of the journal Nature researchers report they have turned adult skin cells directly into neurons without going through the intermediate stage of stem cells. The neurons even formed synapses with each other. Great stuff! John K Clark
From stefano.vaj at gmail.com Thu Jan 28 18:44:33 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 28 Jan 2010 19:44:33 +0100 Subject: [ExI] 1984 In-Reply-To: <63635DB5A46545159CAC96BFC63615D6@spike> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net> <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com> <0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net> <580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com> <63635DB5A46545159CAC96BFC63615D6@spike> Message-ID: <580930c21001281044od9b7cbfw8d6b5dd48702347c@mail.gmail.com> On 27 January 2010 17:27, spike wrote: > Ja. The goal would be a political party such that pleasure is the only > conceivable parameter of the party's influence. Or one so influential that everybody would consider as pleasurable whatever it thinks it is... :-) -- Stefano Vaj From gts_2000 at yahoo.com Thu Jan 28 19:05:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 28 Jan 2010 11:05:00 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <327154.86876.qm@web36501.mail.mud.yahoo.com> --- On Wed, 1/27/10, Stathis Papaioannou wrote: >> When the program finishes, the system will have made every possible meaningful association of W to other words. Will it then have conscious understanding of the meaning of W? No. The human operator will understand W but s/h systems have no means of attaching meanings to symbols. The system followed purely syntactic rules to make all those hundreds of millions of associations without ever understanding them. It cannot get semantics from syntax. I name my dictionary-word-association-program s/h system above "DWAP". > I'm afraid I don't agree. The man in the room doesn't understand the symbols, the matter in the computer doesn't understand the symbols, but the process of computing *does* understand the symbols. You lost me there. Either DWAP has conscious understanding of W (in which case it 'has semantics'), or else DWAP does not have conscious understanding of W. First you agreed with me that DWAP does not have semantics, and you also made the excellent observation that a human who performed the same syntactic operations on English symbols would also not obtain conscious understanding of the symbols merely by virtue of having performed those operations. It would take something else, you said. But now it seems that you've reneged. Now you want to say that DWAP has semantics? I think you had it right the first time. So let me ask you again in clear terms: Does DWAP have conscious understanding of W? Or not? And would a human non-English-speaker obtain conscious understanding of W from performing the same syntactic operations as did DWAP? Or not?
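For concreteness, here is a minimal sketch of the DWAP loop as Gordon describes it, in Python; the toy dictionary and function name are invented for illustration, not any real lexicon API. Note that nothing in it ever touches anything but the forms of the symbols:

# A word -> definition mapping stands in for the digital dictionary.
def associate(word, dictionary, seen=None):
    """Recursively associate a word with every word reachable
    through definitions, which is all DWAP is described as doing."""
    if seen is None:
        seen = set()
    if word in seen or word not in dictionary:
        return seen
    seen.add(word)
    for token in dictionary[word].split():
        associate(token, dictionary, seen)
    return seen

toy_dictionary = {
    "sphere": "round solid figure",
    "round": "shaped like a circle",
    "circle": "round plane figure",
    "figure": "shape",
}
print(associate("sphere", toy_dictionary))

Whether the resulting web of associations amounts to understanding is, of course, exactly the point in dispute.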
-gts From natasha at natasha.cc Thu Jan 28 19:02:49 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 28 Jan 2010 14:02:49 -0500 Subject: [ExI] funny headlines again In-Reply-To: <8CC6E5C63666AAB-731C-1651@webmail-m021.sysops.aol.com> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com><75AD3D26B9E24931B21F63C77D5043FE@spike><4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> <8CC6E5C63666AAB-731C-1651@webmail-m021.sysops.aol.com> Message-ID: <20100128140249.mtqb3t5ark4c0g8k@webmail.natasha.cc> Thumbs up. Quoting ablainey at aol.com: > A very apt headline considering the subject. Good find! > -----Original Message----- > From: spike > To: 'ExI chat list' > Sent: Thu, 28 Jan 2010 17:30 > Subject: Re: [ExI] funny headlines again > >> Subject: [ExI] funny headlines again >>> >> Remember the days when the mainstream news stories were edited to >> perfection... {8^D spike > Here's one from this morning. It kinda has a double meaning if one > doesn't mind the overt nouning of an adjective. Since converts in > Egypt need to hide, the converts are also coverts: From nebathenemi at yahoo.co.uk Thu Jan 28 19:33:57 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Thu, 28 Jan 2010 19:33:57 +0000 (GMT) Subject: [ExI] funny headlines again In-Reply-To: Message-ID: <56535.47327.qm@web27003.mail.ukl.yahoo.com> Spike asked Damien if it was getting harder to find good proofreaders. Well, I don't know about the situation in San Antonio, but over in the UK the problem is finding media organisations willing to pay enough money to have a large enough subediting staff. The Daily Telegraph (a daily paper right-wing enough to be nicknamed "The Torygraph") outsourced their subediting to a firm in Australia, whose low-cost values shone through when I discovered a page headed with "Insert Breaker Text Here", suggesting the layout wasn't entirely finished before they went to press. Spike used an example from FOX - being a Murdoch-owned business, I expect the Great Cost-Slasher himself ordered across-the-board cuts in every department, drop in production standards be damned. Tom From spike66 at att.net Thu Jan 28 20:22:56 2010 From: spike66 at att.net (spike) Date: Thu, 28 Jan 2010 12:22:56 -0800 Subject: [ExI] funny headlines again In-Reply-To: <56535.47327.qm@web27003.mail.ukl.yahoo.com> References: <56535.47327.qm@web27003.mail.ukl.yahoo.com> Message-ID: <4B51DF2526204E1A9CBDDAF0C94EE6CB@spike> > ...On Behalf Of Tom Nowell ... > Subject: Re: [ExI] funny headlines again > > Spike asked Damien if it was getting harder to find good > proofreaders...The Daily Telegraph (a daily paper > right-wing enough to be nicknamed "The Torygraph") outsourced > their subediting to a firm in Australia... Tom Tom you gave me a hell of an idea. We can be rich and famous! (You can do famous, I'll be rich.) We can form a news agency, then outsource the copy editing to Africa. Reasoning: the Africans learn their English not the way you and I did, on the playground, but rather from a textbook. If you personally know African immigrants, they tend to have nearly perfect grammar and spelling, ja?
Furthermore, Africa is a continent not already overflowing with money like Europe, Australia and North America, so we could get the labor force cheeeeapy cheap. What a business model! spike From gts_2000 at yahoo.com Thu Jan 28 20:26:22 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 28 Jan 2010 12:26:22 -0800 (PST) Subject: [ExI] The digital nature of brains. In-Reply-To: <73A630D2-BB9A-4F25-A0B3-54E399F56BCB@bellsouth.net> Message-ID: <701054.58414.qm@web36508.mail.mud.yahoo.com> --- On Wed, 1/27/10, John Clark wrote: > If syntax is not only insufficient for meaning > but it's not even part of meaning then what good is it? Programs use syntactic rules to manipulate symbols. They look at the forms of symbols, not at their meanings. Let X = some symbol Let Y = the meaning of that symbol Computers can do no more than this sort of operation: 'If symbol X looks like [some form] then do [some operation] on X.' Computers will keep getting better at this but they will never know Y. -gts From natasha at natasha.cc Thu Jan 28 20:36:54 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 28 Jan 2010 15:36:54 -0500 Subject: [ExI] funny headlines again In-Reply-To: <56535.47327.qm@web27003.mail.ukl.yahoo.com> References: <56535.47327.qm@web27003.mail.ukl.yahoo.com> Message-ID: <20100128153654.r525w0bmsw44ockw@webmail.natasha.cc> Wel sayd, Brilliant! Quoting Tom Nowell : > Spike asked Damien if it was getting harder to find good > proofreaders. Well, I don't know about the situation in San Antonio, > but over in the UK the problem is finding media organisations > willing to pay enough money to have a large enough subediting staff. > The Daily Telegraph (a daily paper right-wing enough to be > nicknamed "The Torygraph") outsourced their subediting to a firm in > Australia, whose low-cost values shone through when I discovered a > page headed with "Insert Breaker Text Here", suggesting the layout > wasn't entirely finished before they went to press. > > Spike used an example from FOX - being a Murdoch-owned business, I > expect the Great Cost-Slasher himself ordered across-the-board cuts > in every department, drop in production standards be damned. > > Tom From spike66 at att.net Thu Jan 28 20:52:58 2010 From: spike66 at att.net (spike) Date: Thu, 28 Jan 2010 12:52:58 -0800 Subject: [ExI] funny headlines again In-Reply-To: <4B51DF2526204E1A9CBDDAF0C94EE6CB@spike> References: <56535.47327.qm@web27003.mail.ukl.yahoo.com> <4B51DF2526204E1A9CBDDAF0C94EE6CB@spike> Message-ID: <70DC31E04F3B4119A6380163F60C5B43@spike> > ...On Behalf Of spike > Subject: Re: [ExI] funny headlines again > > > ...On Behalf Of Tom Nowell > ... > > ...The Daily Telegraph...outsourced their subediting...Tom > > ...We can form a > news agency, then outsource the copy editing to Africa... spike > Expanding on my and Tom's idea of a low-cost news agency: Currently news agencies write their stories almost exclusively from a political point of view. Wars, battles, elections, etc, are all reported on a political meme basis, causes, ideas and so forth. But if one looks at the history of warfare, the really critical deciding factor is always the technology. It matters not one bit how righteous is the cause, or what deity is on your side, if the other guy has nukes.
This goes all the way back, as recorded in the ancient book of Judges chapter 1 verse 19: ...And the LORD was with Judah; and he drave out the inhabitants of the mountain; but could not drive out the inhabitants of the valley, because they had chariots of iron... So here's the idea: report the news from a memetically totally neutral point of view, and from a science and technological point of view. Imagine you are a post-singularity conscious machine of some sort, and you were given the chance to go back in time and watch your own ancestors fight among themselves (for all humans would be your mind-fathers), so you have no favorites and no favored causes, other than you always cheer for the most technologically advanced, for these are your more immediate predecessors. This is a little like Slashdot, but they focus on technology to the exclusion of political conflict. What I am suggesting is that we report normal news from a machine's perspective. And outsource the copy editing to Africa to save money. spike From stathisp at gmail.com Thu Jan 28 22:16:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 29 Jan 2010 09:16:50 +1100 Subject: [ExI] The Robot Reply to the CRA In-Reply-To: <712647.45893.qm@web36508.mail.mud.yahoo.com> References: <712647.45893.qm@web36508.mail.mud.yahoo.com> Message-ID: On 29 January 2010 04:00, Gordon Swobe wrote: > Before I answer this message, Stathis... > > I picked up a copy of _Rediscovery of Mind_. Chalmers quoted Searle out of context, just as I suspected. Searle considers three different versions of the silicon brain thought experiment and does not endorse any of them. In the first version the chip-brain works fine (the extropian pipe dream), in the second the relationship between mind and behavior is broken (Chalmers quotes a part of this) and in the third version the patient retains a mental life but becomes paralyzed. > > Just three sentences before the Chalmers quote, Searle adamantly rejects the notion that chips can duplicate the causal powers of neurons. He explains the different versions of the thought experiment only by way of bringing attention to some important ideas in the philosophy of mind. More later. I don't have access to the book and would be interested in the actual quote. It sounds like Searle may be saying the only thing he can consistently say: that the artificial neurons won't work because the brain's behaviour cannot be replicated by a computer. This also implies that philosophical zombies and weak AI are impossible. Functionalism remains intact, however, given that Chalmers' argument is only that IF a functional analogue of the brain could be made THEN that functional analogue would also necessarily duplicate consciousness. This is a logical requirement: not even God could make zombie neurons. -- Stathis Papaioannou From stathisp at gmail.com Thu Jan 28 22:35:17 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 29 Jan 2010 09:35:17 +1100 Subject: [ExI] The digital nature of brains. In-Reply-To: <701054.58414.qm@web36508.mail.mud.yahoo.com> References: <73A630D2-BB9A-4F25-A0B3-54E399F56BCB@bellsouth.net> <701054.58414.qm@web36508.mail.mud.yahoo.com> Message-ID: On 29 January 2010 07:26, Gordon Swobe wrote: > --- On Wed, 1/27/10, John Clark wrote: > >> If syntax is not only insufficient for meaning >> but it's not even part of meaning then what good is it? > > Programs use syntactic rules to manipulate symbols. They look at the forms of symbols, not at their meanings.
> > Let X = some symbol > > Let Y = the meaning of that symbol > > Computers can do no more than this sort of operation: > > 'If symbol X looks like [some form] then do [some operation] on X.' > > Computers will keep getting better at this but they will never know Y. That's all brains do too, and it feels like meaning. -- Stathis Papaioannou From max at maxmore.com Fri Jan 29 00:36:17 2010 From: max at maxmore.com (Max More) Date: Thu, 28 Jan 2010 18:36:17 -0600 Subject: [ExI] funny headlines again Message-ID: <201001290036.o0T0aTGb017069@andromeda.ziaspace.com> This one's not a typo, but it's funny. Actually, since it's Fox, perhaps it's deliberate: Girl Who Got Wish to Meet Obama Dies From Cancer Max From Frankmac at ripco.com Fri Jan 29 00:36:27 2010 From: Frankmac at ripco.com (Frank McElligott) Date: Thu, 28 Jan 2010 19:36:27 -0500 Subject: [ExI] sounds like a meme to me Message-ID: <002201caa07b$1c5d5260$ad753644@sx28047db9d36c> Could not help thinking that the collapse of the U.S. and world markets was caused by a powerful meme, and now at Davos they are creating more without considering that they are giving life to memes which could cause a double-dip recession. The New York Times recently published an article by R. Shiller titled "An Echo Chamber of Boom and Bust." He wrote: What happened? Economic analysts often turn to indicators like employment, housing starts or retail sales as causes of a recovery, when in fact they are merely symptoms. For a fuller explanation, look beyond the traditional economic links and think of the world economy as driven by social epidemics, contagion of ideas and huge feedback loops that gradually change world views. These social epidemics can travel as swiftly as swine flu: both spread from person to person and can reach every corner of the world in short order. From max at maxmore.com Fri Jan 29 00:44:55 2010 From: max at maxmore.com (Max More) Date: Thu, 28 Jan 2010 18:44:55 -0600 Subject: [ExI] funny headlines again Message-ID: <201001290045.o0T0j8nb004196@andromeda.ziaspace.com> Umm, NOT funny for the poor girl, of course. Sorry, poor choice of headlines there. >This one's not a typo, but it's funny. Actually, since it's Fox, >perhaps it's deliberate: > >Girl Who Got Wish to Meet Obama Dies From Cancer From thespike at satx.rr.com Fri Jan 29 00:59:59 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 28 Jan 2010 18:59:59 -0600 Subject: [ExI] funny headlines again In-Reply-To: <201001290045.o0T0j8nb004196@andromeda.ziaspace.com> References: <201001290045.o0T0j8nb004196@andromeda.ziaspace.com> Message-ID: <4B62330F.1020606@satx.rr.com> On 1/28/2010 6:44 PM, Max More wrote: > Umm, NOT funny for the poor girl, of course. Sorry, poor choice of > headlines there. >> This one's not a typo, but it's funny. Actually, since it's Fox, >> perhaps it's deliberate: >> >> Girl Who Got Wish to Meet Obama Dies From Cancer I don't see why or how it's funny. You mean Obama is the Walking Death? I can see why it'd be mildly amusing (although hard to believe) if she'd wanted to meet Cheney or, say, Mother Teresa. Damien Broderick From emlynoregan at gmail.com Fri Jan 29 01:11:28 2010 From: emlynoregan at gmail.com (Emlyn) Date: Fri, 29 Jan 2010 11:41:28 +1030 Subject: [ExI] geeky people, I need a couple of alpha testers... Message-ID: <710b78fc1001281711n576e2ee9k186683d2ca0298ca@mail.gmail.com> ... for something of very little consequence. I use an RSS to email gateway for reading newsfeeds. A couple of days ago the free service I use went to non-free. Crap. I figured "it can't be hard to make one of these", and wrote one. It's really, really, really bare bones but it does appear to work, and I could use some other users besides myself, to add a bit of load. So, if a few people could use it to subscribe to some stuff, I'd be eternally grateful, or at least I'd think you are cool for a little while. You subscribe to a feed by going here: http://www.productx.net You unsubscribe from a feed by clicking the unsubscribe link at the bottom of any notification email. Be warned, it's completely unconfigurable at the moment. Also, I *will* see all your feeds and email addresses, I'm monitoring this stuff of course, so don't subscribe to anything you'd be embarrassed for me to know about :-) Quick tech notes: it's written in python, running in google app engine, currently costing me zero. -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog
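A minimal sketch of the poll-and-mail loop Emlyn describes, assuming the third-party feedparser library and App Engine's outbound mail API; the sender address, function name and seen-id storage are invented for illustration, since the actual productx.net code is not public:

import feedparser                        # third-party RSS/Atom parser
from google.appengine.api import mail    # App Engine outbound mail

def notify(feed_url, subscriber, seen_ids):
    """Fetch a feed and mail the subscriber any entries not yet seen."""
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        entry_id = entry.get("id", entry.link)   # fall back to the link
        if entry_id in seen_ids:
            continue
        mail.send_mail(
            sender="notify@example.com",          # illustrative address
            to=subscriber,
            subject=entry.title,
            body=entry.link)
        seen_ids.add(entry_id)
    return seen_ids

In production the seen ids would live in the datastore and notify() would hang off a cron job, but the core of such a gateway really is about this small.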
From lacertilian at gmail.com Fri Jan 29 01:27:06 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 28 Jan 2010 17:27:06 -0800 Subject: [ExI] sounds like a meme to me In-Reply-To: <002201caa07b$1c5d5260$ad753644@sx28047db9d36c> References: <002201caa07b$1c5d5260$ad753644@sx28047db9d36c> Message-ID: 2010/1/28 Frank McElligott > Could not help thinking that the collapse of the U.S. and world markets was caused by a powerful meme, and now at Davos they are creating more without considering that they are giving life to memes which could cause a double-dip recession. No doubt about it in my mind. Economic metastability is just one brilliant memetic engineer away from being a reality. Nothing is really stopping the US from becoming a utopia within a couple months, in purely physical terms, but obviously moving the gargantuan mass of bad ideas out of the way is going to take a whole lot longer than that. My estimate would be on the order of hundreds of years, optimistically, even accounting for a technological singularity in the meantime. Well, unless it's a malignant technological singularity. Then it'll just eat the planet and everything else is moot! Here's hoping our artificial god sees fit to devote a couple tons of quantum-computing matter toward giving us a nice afterlife. Preferably using some process that doesn't do this: http://nobodyscores.loosenutstudio.com/index.php?id=524 From lacertilian at gmail.com Fri Jan 29 01:34:35 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 28 Jan 2010 17:34:35 -0800 Subject: [ExI] funny headlines again In-Reply-To: <4B62330F.1020606@satx.rr.com> References: <201001290045.o0T0j8nb004196@andromeda.ziaspace.com> <4B62330F.1020606@satx.rr.com> Message-ID: Damien Broderick : > I don't see why or how it's funny. You mean Obama is the Walking Death? I > can see why it'd be mildly amusing (although hard to believe) if she'd > wanted to meet Cheney or, say, Mother Teresa. No! Of course it's funny! It's funny whenever ANYONE is the Walking Death. Even if the one in question is, in fact, the Walking Death. In that case, it's funny because it's true. Otherwise it's funny because it isn't true. Comedy! From msd001 at gmail.com Fri Jan 29 02:09:43 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 28 Jan 2010 21:09:43 -0500 Subject: [ExI] predictive neurons?
Message-ID: <62c14241001281809n3cac83edy9a885b6f9f479758@mail.gmail.com> My thought below the quote from: 'How We Decide' And The Paralysis Of Analysis [Fresh Air 2010-01-22] http://www.npr.org/templates/transcript/transcript.php?storyId=122854276 [ begin quote - about 1/3 into the article ] But dopamine's even more important than that. It also seems to modulate many of our feelings, from the pleasure of eating chocolate cake or a crack high to feelings of fear and disgust. So it, in many respects, is one of the key neurotransmitters of emotion in general. And I think when you talk about decision-making, that's part of what makes it so important. And I think there's also been a lot of really interesting work, much of it done by a scientist named Wolfram Schultz at Cambridge University, that has shown how dopamine neurons react in detail to the real world. GROSS: Can you tell us about one of those studies? Mr. LEHRER: His experiments observe a really elegant protocol. He'll monitor individual dopamine neurons in the brain of a monkey, and he'll show that at first, these neurons respond to a reward, to a squirt of juice. So if you give the monkey a squirt of juice, these dopamine neurons will fire, and the monkey experiences the pleasure of getting a squirt of apple juice. That's the reward. But these neurons quickly adapt to the pleasure. So they quickly stop firing. That, you know, that makes perfect sense. You have an iPod. It makes you happy for a day or two, and then it stops, you know, giving you squirts of joy every time you look at the iPod. We adapt to these kinds of hedonic pleasures. But what Wolfram Schultz found is that if you then play a bell before giving the monkey a squirt of juice, the dopamine neurons will fire whenever you play the bell. And if you flash a light before playing a bell before giving a squirt of juice, they'll fire whenever you flash a light. And if you play a song before flashing a light before ringing the bell, etc., etc., the dopamine neurons will always try to predict the reward. So they're called prediction neurons. Their job is to predict the first event that signals a reward is coming, a squirt of juice is coming. And so you can begin to understand how these neurons are so important in terms of allowing us to make sense of reality, in terms of finding the patterns and correlations and causations that allow us to actually figure out what's going to happen, and most importantly, from the perspective of evolution, figure out when our squirts of juice are going to arrive - try to make sense of those rewards. You know, and so you can begin to understand why they're so important for decision-making in terms of allowing us to make decisions that allow us to maximize our rewards. [ end quote ] Is it possible that over a lifetime of event chaining some people build some twitchiness that successfully predicts events in their local environment from a complex set of cues drawn from prior experience? And for those interested: Wolfram Schultz at Cambridge University: http://www.neuroscience.cam.ac.uk/directory/profile.php?Schultz Tobler PN, O'Doherty JP, Dolan R, Schultz W (2007), "Reward value coding distinct from risk attitude-related uncertainty coding in human reward systems" J Neurophysiol 97:1621-1632 http://www.neuroscience.cam.ac.uk/publications/pubInfo.php?foreignId=pubmed%3A17122317 Temporal Difference Model Reproduces Anticipatory Neural Activity http://portal.acm.org/citation.cfm?id=1120460
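The "prediction neuron" behaviour Lehrer describes is the textbook temporal-difference picture, as Mike's last link suggests. A toy sketch, with invented numbers, of how the reward prediction creeps backward from juice to bell to light over repeated trials:

# Toy temporal-difference learning over the cue chain light -> bell -> juice.
# V[s] is the learned reward prediction for each cue; delta is the
# dopamine-like prediction error.  All parameter values are made up.
states = ["light", "bell", "juice"]
reward = {"light": 0.0, "bell": 0.0, "juice": 1.0}
V = dict.fromkeys(states, 0.0)
alpha, gamma = 0.5, 1.0   # learning rate, discount factor

for trial in range(20):
    for i, s in enumerate(states):
        next_value = V[states[i + 1]] if i + 1 < len(states) else 0.0
        delta = reward[s] + gamma * next_value - V[s]   # prediction error
        V[s] += alpha * delta

print(V)   # the earliest cue ends up carrying the prediction

On early trials the error fires at the juice; a few trials later it has migrated to the bell, then to the light, which is Schultz's result in miniature.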
From lacertilian at gmail.com Fri Jan 29 02:32:49 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 28 Jan 2010 18:32:49 -0800 Subject: [ExI] predictive neurons? In-Reply-To: <62c14241001281809n3cac83edy9a885b6f9f479758@mail.gmail.com> References: <62c14241001281809n3cac83edy9a885b6f9f479758@mail.gmail.com> Message-ID: My first reaction is: ugh! Pavlov again! Paired with the fact that we experience pleasure whenever someone agrees with us, AND whenever we disagree with someone else, suddenly the elaborate audio-visual lightshows on every single news (read: "opinion") network start to make a lot of sense. I'm not sure that's an intentional ploy, though. Do they change the intros whenever ratings drop, so that new associations begin forming in their audience? My second reaction is gladness for learning a bit more about dopamine, which balances out the brief spike in outrage. Still, I kind of wish my brain wasn't hardwired for disappointment. Nice catch, Mike. And, is that a preliminary hypothesis toward explaining precognition that I see you forming there? From max at maxmore.com Fri Jan 29 05:07:47 2010 From: max at maxmore.com (Max More) Date: Thu, 28 Jan 2010 23:07:47 -0600 Subject: [ExI] Is fusion success in sight? Message-ID: <201001290508.o0T581T0013564@andromeda.ziaspace.com> Hmmm. Experiments at the National Ignition Facility have given researchers confidence that they'll achieve a milestone in nuclear fusion sometime this year. http://cosmiclog.msnbc.msn.com/archive/2010/01/28/2187974.aspx Max From thespike at satx.rr.com Fri Jan 29 05:44:15 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 28 Jan 2010 23:44:15 -0600 Subject: [ExI] Humans caused Aussie megafauna extinction Message-ID: <4B6275AF.3030307@satx.rr.com> Humans caused Aussie megafauna extinction Friday, 29 January 2010 by Gemma Black Cosmos Online SYDNEY: The key anomaly in the Australian megafauna debate has been resolved, and "if people hadn't arrived in Australia, we'd still have the giants of yesteryear to admire," researchers report. Giant Australian marsupials, reptiles and flightless birds went extinct between 45,000 and 60,000 years ago. The reason behind the extinction of Australian megafauna has been the subject of debate for decades, said the lead author of the study, Richard Roberts from the University of Wollongong near Sydney, Australia. One theory is that Australian megafauna, including three-metre-high kangaroos and flightless birds weighing half a tonne, was driven to extinction at around the same time that humans arrived. The other theory is that the onset of the latest ice age, which peaked around 21,000 years ago, caused the extinction of the megafauna. Cuddie Springs anomaly While several archaeological sites support the first theory, one site at Cuddie Springs stood out as an anomaly. The study, published in the journal Science, showed that the supposedly undisturbed fossils had actually been moved, resolving the anomaly. Cuddie Springs, in western New South Wales, has long been promoted as a site containing both megafauna fossils and stone tools in the same sedimentary layer.
By dating the surrounding sediments researchers found that the fossils and tools could be as young as 30,000 years - suggesting that humans and megafauna co-inhabited the continent for an extensive period of time. However, a geologist from the Australian National University, Rainer Grun, recently dated the fossils directly, using a combination of electron spin resonance (ESR) and uranium-series (U-series) dating. Cuddie Springs deposits had moved The results, published online in the journal Quaternary Science Reviews, showed that some of the supposedly undisturbed fossils in the sedimentary layer were actually more than 450,000 years old, suggesting that they had been reworked from much older deposits. "It seems that none of the fossils in the archaeological levels at Cuddie Springs are younger than the extinction window between 51,000 and 40,000 years ago," said Richard. "This pulls Cuddie back into line with all other sites on the continent, and removes its 'anomaly tag'," he said. "Rainer Grun leads the world in ESR/U-series dating, so his findings for Cuddie Springs deserve serious consideration," Richard said. Extinction is complicated, experts warn However, Danielle Clode, a zoologist from the University of Melbourne, and author of Prehistoric Giants: the Megafauna of Australia, thinks it might be misguided to suggest that there was just one cause behind the extinction of Australian megafauna. "I know how incredibly difficult it is to understand what exactly is driving species extinction today, even when we can watch and study the process directly," she said. "I think it is very unlikely that the decline of the megafauna was caused by a single factor, and even less likely that we are ever going to be able to work out what did drive their extinctions, in anything other than the most general terms." From spike66 at att.net Fri Jan 29 06:01:18 2010 From: spike66 at att.net (spike) Date: Thu, 28 Jan 2010 22:01:18 -0800 Subject: [ExI] Humans caused Aussie megafauna extinction In-Reply-To: <4B6275AF.3030307@satx.rr.com> References: <4B6275AF.3030307@satx.rr.com> Message-ID: <69A861A1EE3A466CB81D351D2503FDC8@spike> > ...On Behalf Of Damien Broderick ... > ... "if people hadn't arrived in > Australia, we'd still have the giants of yesteryear to > admire," researchers report... Thanks for the article Damien. What type of creatures do they refer to with the term "we"? What lifeform are these researchers? spike From pharos at gmail.com Fri Jan 29 09:19:30 2010 From: pharos at gmail.com (BillK) Date: Fri, 29 Jan 2010 09:19:30 +0000 Subject: [ExI] geeky people, I need a couple of alpha testers... In-Reply-To: <710b78fc1001281711n576e2ee9k186683d2ca0298ca@mail.gmail.com> References: <710b78fc1001281711n576e2ee9k186683d2ca0298ca@mail.gmail.com> Message-ID: On 1/29/10, Emlyn wrote: > ... for something of very little consequence. > > I use an RSS to email gateway for reading newsfeeds. A couple of days > ago the free service I use went to non-free. Crap. I figured "it can't > be hard to make one of these", and wrote one. > > Maybe I'm being extra geekly, but why on earth would you want to read the RSS firehose in your email? Don't you get enough email as it is? :) Google RSS Reader organizes the feeds into folders, just like email. If you want extra features, there are things like FeedDemon 3 or Feedly and others, that try to do more for you. I was relieved when I started to use Google Reader, so that I could abandon some news email lists and move them to RSS feeds instead. 
BillK From pharos at gmail.com Fri Jan 29 09:46:36 2010 From: pharos at gmail.com (BillK) Date: Fri, 29 Jan 2010 09:46:36 +0000 Subject: [ExI] Humans caused Aussie megafauna extinction In-Reply-To: <4B6275AF.3030307@satx.rr.com> References: <4B6275AF.3030307@satx.rr.com> Message-ID: On 1/29/10, Damien Broderick wrote: > Humans caused Aussie megafauna extinction > > SYDNEY: The key anomaly in the Australian megafauna debate has been > resolved, and "if people hadn't arrived in Australia, we'd still have the > giants of yesteryear to admire," researchers report. > > The Australian megafauna haven't gone extinct. I've heard stories about Australian women.......... BillK From stathisp at gmail.com Fri Jan 29 10:41:07 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 29 Jan 2010 21:41:07 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <327154.86876.qm@web36501.mail.mud.yahoo.com> References: <327154.86876.qm@web36501.mail.mud.yahoo.com> Message-ID: On 29 January 2010 06:05, Gordon Swobe wrote: > --- On Wed, 1/27/10, Stathis Papaioannou wrote: > >>> When the program finishes, the system will >>> have made every possible meaningful association of W to >>> other words. Will it then have conscious understanding of the >>> meaning of W? No. The human operator will understand W but >>> s/h systems have no means of attaching meanings to >>> symbols. The system followed purely syntactic rules to make all >>> those hundreds of millions of associations without ever >>> understanding them. It cannot get semantics from syntax. > > I name my dictionary-word-association-program s/h system above "DWAP". > >> I'm afraid I don't agree. The man in the room doesn't >> understand the symbols, the matter in the computer doesn't understand >> the symbols, but the process of computing *does* understand the >> symbols. > > You lost me there. Either DWAP has conscious understanding of W (in which case it 'has semantics'), or else DWAP does not have conscious understanding of W. It depends on whether DWAP is actually capable of natural language. It's easy to write a dictionary, but it isn't easy to write a program which passes the TT, which is why it hasn't been done. The brain does a lot of things subconsciously, arguably most things. You are not aware of the processing going on in your brain when you are having a conversation: you are only conscious of "words", "sentences", "ideas" which are the high level result of very complex low level switching type behaviour by neurons. Your error is to look at the low level behaviour of a computer and say that you don't see any meaning there, but ignore the fact that the same is true of the low level behaviour of the brain. So: if DWAP is capable of passing the TT then DWAP probably has conscious understanding, even if the components of DWAP manipulating the symbols understand no more than a neuron does. > First you agreed with me that DWAP does not have semantics, and you also made the excellent observation that a human who performed the same syntactic operations on English symbols would also not obtain conscious understanding of the symbols merely by virtue of having performed those operations. It would take something else, you said. > > But now it seems that you've reneged. Now you want to say that DWAP has semantics? I think you had it right the first time. > > So let me ask you again in clear terms: > > Does DWAP have conscious understanding of W? Or not? 
> > And would a human non-English-speaker obtain conscious understanding of W from performing the same syntactic operations as did DWAP? Or not? A human non-English-speaker would be unable to perform the operations of a DWAP capable of holding a conversation, but if he could, he would have no more understanding than the neurons have of what they are doing. However, he would be implementing an algorithm that has understanding, just as the dumb neurons (certainly much dumber than even a very dumb person) are implementing an algorithm that has understanding. Do you acknowledge this basic point about a system, that understanding emerges from the interaction of its components, even though the components individually or even collectively lack it? -- Stathis Papaioannou From emlynoregan at gmail.com Fri Jan 29 14:15:30 2010 From: emlynoregan at gmail.com (Emlyn) Date: Sat, 30 Jan 2010 00:45:30 +1030 Subject: [ExI] geeky people, I need a couple of alpha testers... In-Reply-To: References: <710b78fc1001281711n576e2ee9k186683d2ca0298ca@mail.gmail.com> Message-ID: <710b78fc1001290615v16e374ei1a6f20a24656ca26@mail.gmail.com> 2010/1/29 BillK : > On 1/29/10, Emlyn wrote: >> ... for something of very little consequence. >> >> I use an RSS to email gateway for reading newsfeeds. A couple of days >> ago the free service I use went to non-free. Crap. I figured "it can't >> be hard to make one of these", and wrote one. >> > Maybe I'm being extra geekly, but why on earth would you want to read > the RSS firehose in your email? Don't you get enough email as it is? > :) That's a really fair criticism. I tried newsreaders a few times, and just couldn't get into them, I'd just stop looking at them after a while. I've found for years that I was a one messaging app person, and that app is gmail; if it doesn't hit my gmail it doesn't exist. Even facebook, of which I am an embarrassingly heavy user, is only really sticky for me because of email notifications. I use twitter a bit via a firefox add-on, but not much; it won't play well with email, and so I'm not a big fan. It's a sign I'm getting old I guess, I'm getting stuck on old paradigms (although it's not email, it's gmail that I like). But, I think I'm not the only email centric person around, even on this list. Maybe one day google wave will dislodge me from my anachronisms? Just now though, it's still email for me. (btw you should see my crazy filter list :-) ) -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From pharos at gmail.com Fri Jan 29 14:45:29 2010 From: pharos at gmail.com (BillK) Date: Fri, 29 Jan 2010 14:45:29 +0000 Subject: [ExI] geeky people, I need a couple of alpha testers... In-Reply-To: <710b78fc1001290615v16e374ei1a6f20a24656ca26@mail.gmail.com> References: <710b78fc1001281711n576e2ee9k186683d2ca0298ca@mail.gmail.com> <710b78fc1001290615v16e374ei1a6f20a24656ca26@mail.gmail.com> Message-ID: On 1/29/10, Emlyn wrote: > That's a really fair criticism. I tried newsreaders a few times, and > just couldn't get into them, I'd just stop looking at them after a > while. I've found for years that I was a one messaging app person, and > that app is gmail; if it doesn't hit my gmail it doesn't exist. Even > facebook, of which I am an embarrassingly heavy user, is only really > sticky for me because of email notifications. I use twitter a bit via > a firefox add-on, but not much; it won't play well with email, and so > I'm not a big fan.
> > Maybe one day google wave will dislodge me from my anachronisms? Just > now though, it's still email for me. > > (btw you should see my crazy filter list :-) ) Well, if you're crazy about Gmail, then I think you should give Google Reader another try. You can set up reader folders like News, Financial, Tech, Tech News, Science, etc. and allocate each RSS feed to be put into the appropriate folder. Google Reader will also suggest more feeds that you might like, depending on your subscriptions. The main difference I see is that Gmail is usually stuff that *must* be read, whereas on Reader you can stack up thousands of feeds to read at your leisure and you know nobody there is waiting on an urgent response. Best wishes, BillK From msd001 at gmail.com Fri Jan 29 14:52:18 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 29 Jan 2010 09:52:18 -0500 Subject: [ExI] predictive neurons? In-Reply-To: References: <62c14241001281809n3cac83edy9a885b6f9f479758@mail.gmail.com> Message-ID: <62c14241001290652w492f5400u4bebe7705bd5480b@mail.gmail.com> On Thu, Jan 28, 2010 at 9:32 PM, Spencer Campbell wrote: > Nice catch, Mike. And, is that a preliminary hypothesis toward > explaining precognition that I see you forming there? yes. maybe not "explaining" as much as "reserving possibility" I've been reading about genetic algorithms as a computational method of problem solving. What I find really fascinating is that each phenotype has almost no intelligence at all - the eventual 'solution' is simply the luckiest of the unintelligent phenotypes that is discovered to answer the fitness function better than any of its siblings or ancestors. It's not so much that nature necessarily has a use for intelligence, because it mostly employs brute force over a long time. If the process of natural selection has any primary goal, we probably won't know what it is - however, an obvious subgoal is survival (and propagation). So despite the relative rarity of life in the universe, where it is known to exist it flourishes in a surprising range of scenarios. If there is any evolved (discovered by dumb luck) predictive or anticipatory ability in humans, it seems to me that there would potentially be some chance for it to manifest in apparently novel situations. If a better predictive modeling ability (precog) confers any advantage to those who have it, then eventually we might all be using it. Arguably, mathematical modeling as offloaded to high performance computers is an extension of our ability to rationally solve problems - then yes, we already are predicting behaviors with a high degree of success and accuracy. "People who bought ABC also bought DEF" is an automated hypothesis that tests the correlation between products and behaviors of people who buy them. This kind of analysis is being done everywhere, every day, all day. (ex: http://thenumerati.net/index.cfm?catID=18)
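Mike's description maps directly onto the standard generational loop. A minimal sketch, with a deliberately dumb genotype and every bit of the "intelligence" concentrated in the fitness function (the target number and all parameters are invented for illustration):

import random

TARGET = 42   # stand-in problem: evolve a guess toward this number

def fitness(genotype):
    return -abs(genotype - TARGET)   # all the "smarts" live here

def evolve(pop_size=20, generations=50):
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # keep the luckier half, refill with mutated copies of the survivors
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [p + random.choice([-1, 1]) for p in parents]
        population = parents + children
    return max(population, key=fitness)

print(evolve())

No individual ever knows anything; selection against the fitness function does all of the work.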
From jameschoate at austin.rr.com Fri Jan 29 15:10:00 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Fri, 29 Jan 2010 15:10:00 +0000 Subject: [ExI] predictive neurons? In-Reply-To: <62c14241001290652w492f5400u4bebe7705bd5480b@mail.gmail.com> Message-ID: <20100129151000.CEDU6.447546.root@hrndva-web09-z02> You miss the very point here, dancing around it like a dervish... The 'intelligence' you speak of -is- the fitness function. ---- Mike Dougherty wrote: > I've been reading about genetic algorithms as a computational method > of problem solving. What I find really fascinating is that each > phenotype has almost no intelligence at all - the eventual 'solution' > is simply the luckiest of the unintelligent phenotypes that is > discovered to answer the fitness function better than any of its > siblings or ancestors. -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From gts_2000 at yahoo.com Fri Jan 29 15:36:07 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 07:36:07 -0800 (PST) Subject: [ExI] The Robot Reply to the CRA In-Reply-To: Message-ID: <100906.9049.qm@web36503.mail.mud.yahoo.com> > > Before I answer this message, Stathis... > > > > I picked up a copy of _Rediscovery of Mind_. Chalmers quoted Searle out of context, just as I suspected. Searle considers three different versions of the silicon brain thought experiment and does not endorse any of them. In the first version the chip-brain works fine (the extropian pipe dream), in the second the relationship between mind and behavior is broken (Chalmers quotes a part of this) and in the third version the patient retains a mental life but becomes paralyzed. > > > > Just three sentences before the Chalmers quote, Searle adamantly rejects the notion that chips can duplicate the causal powers of neurons. He explains the different versions of the thought experiment only by way of bringing attention to some important ideas in the philosophy of mind. More later. > > I don't have access to the book and would be interested in the actual quote. Would you like me to scan the relevant pages and email them to you? -gts From gts_2000 at yahoo.com Fri Jan 29 15:56:08 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 07:56:08 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <176490.99403.qm@web36504.mail.mud.yahoo.com> --- On Fri, 1/29/10, Stathis Papaioannou wrote: >> You lost me there. Either DWAP has conscious >> understanding of W (in which case it 'has semantics'), or >> else DWAP does not have conscious understanding of W. > > It depends on whether DWAP is actually capable of natural language. I explained DWAP's capabilities, but I will again: The human operator enters a word. DWAP assigns that word to variable W and then looks up its definition and assigns that definition to D. DWAP makes the association W=D, and then looks up the definition of each word in D and assigns those definitions to those words in the same way, and then does the same with those words, and so on and so on and so on until it exhausts all possible English words associated with W. To make it more interesting, let us say that DWAP runs this algorithm on every word in the complete English dictionary. Let us say also that DWAP holds all those hundreds of millions of associations live in its massive RAM storage (some people like to equate RAM to conscious mind, and so I humor them). Is the following sentence true? Or is it false? DWAP has conscious understanding of the meanings of English words. -gts From jonkc at bellsouth.net Fri Jan 29 16:22:25 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 29 Jan 2010 11:22:25 -0500 Subject: [ExI] The digital nature of brains.
In-Reply-To: <701054.58414.qm@web36508.mail.mud.yahoo.com> References: <701054.58414.qm@web36508.mail.mud.yahoo.com> Message-ID: <4E44D5EB-5732-43B2-B513-2EA4F9352063@bellsouth.net> Me: > >> If syntax is not only insufficient for meaning but it's not even part of meaning then what good is it? > Gordon Swobe: > Programs use syntactic rules to manipulate symbols. They look at the forms of symbols, not at their meanings Well that's all very well, but you still haven't answered my question. I don't know it for a fact but presumably you are not a computer program, so why do YOU bother to read syntactic symbols if they are not only insufficient for meaning but not even part of meaning? An even better question is why do you bother to produce syntactic symbols, as you have done in such abundance on this list in the last couple of months, when you must know there is absolutely no chance of any of us getting any meaning out of them? John K Clark From gts_2000 at yahoo.com Fri Jan 29 16:44:10 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 08:44:10 -0800 (PST) Subject: [ExI] The digital nature of brains. In-Reply-To: <4E44D5EB-5732-43B2-B513-2EA4F9352063@bellsouth.net> Message-ID: <391208.96649.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/29/10, John Clark wrote: > Well that's all very well, but you still > haven't answered my question. I don't know it for a > fact but presumably you are not a computer program I'll take that as a compliment. > so why do YOU bother to read syntactic symbols if they are not only > insufficient for meaning but not even part of meaning? English symbols have meaning to me just as they do to you. For this reason our brains do not compare well with digital computers. We program digital computers to look only at the forms of symbols whereas natural selection engineered systems like you and me that can look not only at their forms but also at their meanings. > An even better question is why do you bother to produce > syntactic symbols as you have done in such abundance on this > list The adjective "syntactic" describes the form-based nature of the rules for manipulating symbols as found in computer software. The word does not apply to the symbols themselves. -gts From lacertilian at gmail.com Fri Jan 29 17:03:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 29 Jan 2010 09:03:01 -0800 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <176490.99403.qm@web36504.mail.mud.yahoo.com> References: <176490.99403.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > I explained DWAP's capabilities, but I will again: > (a bunch of stuff we already knew) As far as I can tell, the only thing DWAP does is make associations; it does not use those associations to form sentences in order to communicate. This might be deliberate. Are you modelling a human passively listening to, say, a radio tuned to some awful talk show or another? If so, my answer is: it cannot be determined whether or not DWAP has conscious understanding of the meanings of English words. It's noteworthy that if one replaces "DWAP" with "a silent and still human listener" in the prior statement, its truth value is unchanged.
From jonkc at bellsouth.net Fri Jan 29 18:43:06 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 29 Jan 2010 13:43:06 -0500 Subject: [ExI] Understanding is useless (was: The digital nature of brains) In-Reply-To: <391208.96649.qm@web36505.mail.mud.yahoo.com> References: <391208.96649.qm@web36505.mail.mud.yahoo.com> Message-ID: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> On Jan 29, 2010, Gordon Swobe wrote: >> I don't know it for a fact but presumably you are not a computer program > > I'll take that as a compliment. As well you should. > English symbols have meaning to me just as they do to you. Hey, speak for yourself. I just input an ASCII sequence, process it syntactically, and then output a different ASCII sequence. The fact that I have no knowledge of the meaning of one bit of it I have never found the least bit inconvenient, as meaning never actually does anything, so you can get along just fine without it. For example, the Turing Test is completely uninterested in meaning, as is Evolution, and yet it managed to produce the human mind, so it's not much of a stretch to imagine somebody could write a good post without having any idea of what it means. Yes yes I know, I'm setting myself up perfectly for the retort "Haw, I always knew you didn't know what you were talking about", and it's true I don't have a clue what I'm talking about, but I don't consider that an insult. The fact that I'm lacking a fifth wheel called "understanding" has never been the slightest handicap to me, I can still produce a pretty good ASCII sequence. There is no concept more useless than meaning. > natural selection engineered systems like you and me that can look not only at their forms but also at their meanings. How does natural selection do that, as meaning has absolutely no effect on behavior? That is after all why you think the Turing Test can't detect consciousness. > The adjective "syntactic" describes the form-based nature of the rules for manipulating symbols as found in computer software. The word does not apply to the symbols themselves. But in the Chinese Room you kept telling us that the shelf full of books that is too large to fit in the observable universe contains nothing but syntactic symbols without one bit of meaning in the entire lot, and now you're telling us that may not be true, it depends on who or what is reading those books. Perhaps that's why you included the words "in computer software" above, but then you are once again just stating what you're trying to prove in your thought experiment. John K Clark From gts_2000 at yahoo.com Fri Jan 29 19:00:56 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 11:00:56 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <584374.10388.qm@web36504.mail.mud.yahoo.com> --- On Fri, 1/29/10, Spencer Campbell wrote: > As far as I can tell, the only thing DWAP does is make > associations; it does not use those associations to form > sentences in order to communicate. This might be deliberate. Yes. I invented DWAP precisely because some people here had suggested that the association of symbols by computers somehow solves the symbol grounding problem or "gives meaning" to symbols. If that were so then DWAP should have conscious and detailed understanding of the meaning of every English word.
But DWAP has no more understanding of words than does the digital dictionary it references, and the digital dictionary has no more understanding of the words than does the paper dictionary from which it was made. DWAP cannot understand words for the same reason that a piece of paper cannot understand the words you write on it. To your point: we can if you like add to the program the capability to converse in seemingly meaningful ways such that it passes the Turing test. It will then appear to understand words without actually understanding them. -gts From eric at m056832107.syzygy.com Fri Jan 29 19:26:46 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 29 Jan 2010 19:26:46 -0000 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <584374.10388.qm@web36504.mail.mud.yahoo.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> Message-ID: <20100129192646.5.qmail@syzygy.com> Gordon writes: >Yes. I invented DWAP precisely because some people here had suggested > that the association of symbols by computers somehow solves the > symbol grounding problem or "gives meaning" to symbols. You're conflating symbols and words. Meaning is attached to word symbols when the word symbols are associated with sense symbols, not with other word symbols. I made this clear several times already, but you deny it, and stamp your feet and say "no, no, no, it just can't be!" At this point, you must be trying very hard to fail to understand these simple concepts. You continue to assert things which have been shown by multiple independent lines of argument to be counter-factual. You claim that simple rational arguments are "religious idiocy". Why are you so attached to your beliefs? Can't you let go of the need to be "special"? That's what you've been saying all along: "I'm special because I understand, I am conscious. No computer could ever do that, so I'm superior." Step back and look at what you're saying, and how you're saying it. Show some real understanding. Be brave. Question your own assumptions. -eric From gts_2000 at yahoo.com Fri Jan 29 19:25:57 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 11:25:57 -0800 (PST) Subject: [ExI] Understanding is useless In-Reply-To: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> Message-ID: <165704.91501.qm@web36502.mail.mud.yahoo.com> --- On Fri, 1/29/10, John Clark wrote: > Yes yes I know, I'm setting myself up > perfectly for the retort "Haw, I always knew you > didn't know what you were talking about", and > it's true I don't have a clue what I'm talking > about, but I don't consider that an insult. I think you should consider it an insult. Funny, I think, that people will denigrate themselves in such ways to protect the supposed reputations of their supposed conscious computers. Some people here might even call me a chauvinist of sorts for daring to claim that computers don't understand their own words. I suppose typewriters and cell phones should have civil rights too. 
-gts From spike66 at att.net Fri Jan 29 21:29:40 2010 From: spike66 at att.net (spike) Date: Fri, 29 Jan 2010 13:29:40 -0800 Subject: [ExI] cruel beast of capitalsim: was RE: Humans caused Aussie megafauna extinction In-Reply-To: <4B6275AF.3030307@satx.rr.com> References: <4B6275AF.3030307@satx.rr.com> Message-ID: <368968B455C245C7B815B718D67793E0@spike> The global warming crowd has gained a powerful ally: http://www.foxnews.com/story/0,2933,584249,00.html?test=latestnews He surely wasn't describing me with the "cruel beast of capitalism" bit. I am a nice beast of capitalism, a nearly vegetarian beast of capitalism, and not a warmonger at all. I have never monged a war. He is the one who is monging war. Actually I think this tape is probably fake. It doesn't fit with the whole macho blood and guts Bin Laden. What do you think? spike From gts_2000 at yahoo.com Fri Jan 29 21:41:04 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 13:41:04 -0800 (PST) Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <20100129192646.5.qmail@syzygy.com> Message-ID: <488671.6529.qm@web36503.mail.mud.yahoo.com> --- On Fri, 1/29/10, Eric Messick wrote: > You're conflating symbols and words. Words are symbols, Eric. Not all symbols are words but all words are symbols. > Meaning is attached to word symbols when the word symbols > are associated with sense symbols, not with other word > symbols. In this thread I was addressing the claim that the symbol grounding problem is solved when computers associate symbols. -gts From jrd1415 at gmail.com Fri Jan 29 22:04:41 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 29 Jan 2010 15:04:41 -0700 Subject: [ExI] cruel beast of capitalsim: was RE: Humans caused Aussie megafauna extinction In-Reply-To: <368968B455C245C7B815B718D67793E0@spike> References: <4B6275AF.3030307@satx.rr.com> <368968B455C245C7B815B718D67793E0@spike> Message-ID: On Fri, Jan 29, 2010 at 2:29 PM, spike wrote: > Actually I think this tape is probably fake. It doesn't fit with the > whole macho blood and guts Bin Laden. Kool-aid drinker's mistake # 4,826: "macho blood and guts Bin Laden." "Macho blood and guts" is an American thing. Attributing it to bin Laden is pure projection. Bin Laden is a gentle, thoughtful, pious, freedom fighter. Difficult to grasp of course, for anyone sealed inside the "American Exceptionalism" narrative bubble. Best, jeff davis "The West won the world not by the superiority of its ideas or values or religion but rather by its superiority in applying organized violence. Westerners often forget this fact, non-Westerners never do." - Samuel P. Huntington From eric at m056832107.syzygy.com Fri Jan 29 22:40:52 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 29 Jan 2010 22:40:52 -0000 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <488671.6529.qm@web36503.mail.mud.yahoo.com> References: <20100129192646.5.qmail@syzygy.com> <488671.6529.qm@web36503.mail.mud.yahoo.com> Message-ID: <20100129224052.5.qmail@syzygy.com> Gordon writes: >--- On Fri, 1/29/10, Eric Messick wrote: >> You're conflating symbols and words. > >Words are symbols, Eric. Not all symbols are words but all words are symbols. Yes, what we are talking about here are symbols which are not words. >> Meaning is attached to word symbols when the word symbols >> are associated with sense symbols, not with other word >> symbols.
> >In this thread I was addressing the claim that the symbol grounding > problem is solved when computers associate symbols. Yes, you showed that associating a bunch of word symbols with each other doesn't result in meaning. You attacked a straw man. No claim was ever made that associating words with words would create meaning. The claim is that sense symbols must be involved. Now stop whacking at that straw man. Perhaps we should play a game of "spot the fallacy". Here we have "attacking a straw man". You do a very good job with "begging the question", by assuming that syntax can't produce semantics and deriving that computers can't understand. You slip in the "masked man fallacy" with "I know I understand things, I don't know how computers could understand things, so computers can't understand things the way I do." You confuse levels of abstraction by pointing out that computers running weather simulations don't get wet. You try to engage intuition about things of vastly different scales when you try to equate the inability of one thing to accomplish a particular task with a supposed inability for 100 billion interacting things to accomplish the task. You also repeat these fallacies after others have pointed them out to you. I've been trying to figure out what is driving you to do this, but you don't seem willing to back up and look at the whole thing from a higher level. Considering your demonstrated inability to handle levels of abstraction, perhaps you are unable to "go meta" on this. Does this problem afflict you in other areas? -eric From gts_2000 at yahoo.com Fri Jan 29 23:00:37 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 15:00:37 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: <584374.10388.qm@web36504.mail.mud.yahoo.com> Message-ID: <298084.26753.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/29/10, Eric Messick wrote: > Yes, what we are talking about here are symbols which are > not words. I created a different thread to discuss what happens when we equip our digital computer with sensors that correlate sense data to word-symbols. If you were actually paying attention to the debate then you would know about it. -gts From msd001 at gmail.com Fri Jan 29 23:19:15 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 29 Jan 2010 18:19:15 -0500 Subject: [ExI] predictive neurons? In-Reply-To: <20100129151000.CEDU6.447546.root@hrndva-web09-z02> References: <62c14241001290652w492f5400u4bebe7705bd5480b@mail.gmail.com> <20100129151000.CEDU6.447546.root@hrndva-web09-z02> Message-ID: <62c14241001291519v6283aec8n6f7285e163dbc7ea@mail.gmail.com> On Fri, Jan 29, 2010 at 10:10 AM, wrote: > You miss the very point here, dancing around it like a dervish... > > The 'intelligence' you speak of -is- the fitness function. So is the fitness function a result of the environment? I think calling intelligence a feature or property of the environment makes it more like the situation where gravity is the deformation of spacetime topology (or vice versa) -- so how might we improve intelligence? Alter the environment? Is it a recursive process?
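Mike's question about fitness functions can be made operational. Here is a minimal mutation-and-selection sketch, with an invented scoring function standing in for the "environment"; the target value and ranges are arbitrary, chosen only for illustration.

import random

# The "environment" scores each candidate; in this toy setup the
# fitness function is literally a property of the environment.
TARGET = 42  # an arbitrary environmental optimum, invented for illustration

def fitness(x, target=TARGET):
    return -abs(x - target)  # closer to the optimum is fitter

def evolve(generations=1000):
    x = random.randint(0, 100)
    for _ in range(generations):
        mutant = x + random.choice((-1, 1))
        # Selection: the environment, via fitness(), decides what survives.
        if fitness(mutant) > fitness(x):
            x = mutant
    return x

print(evolve())  # converges toward the environmental optimum

On this toy reading the fitness function just is a fact about the environment: change TARGET and the same mutation-selection loop converges somewhere else, which is one way to cash out "improving intelligence by altering the environment".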
From gts_2000 at yahoo.com Fri Jan 29 23:30:22 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 15:30:22 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <584374.10388.qm@web36504.mail.mud.yahoo.com> Message-ID: <234528.39818.qm@web36505.mail.mud.yahoo.com> Eric, If you have a genuine interest in this subject and want to engage me in intelligent discussion then please carefully read the target article: MINDS, BRAINS, AND PROGRAMS http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html Stathis, I hope you've already read it. -gts From jrd1415 at gmail.com Fri Jan 29 23:39:37 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 29 Jan 2010 16:39:37 -0700 Subject: [ExI] Understanding is useless In-Reply-To: <165704.91501.qm@web36502.mail.mud.yahoo.com> References: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> <165704.91501.qm@web36502.mail.mud.yahoo.com> Message-ID: On Fri, Jan 29, 2010 at 12:25 PM, Gordon Swobe wrote: > --- On Fri, 1/29/10, John Clark wrote: >> it's true I don't have a clue what I'm talking >> about, but I don't consider that an insult. Congratulations, John. How many people feel compelled to know, to be right, and to have and give the correct answer to whatever question may be posed? How many feel embarrassed -- even ashamed -- at being publicly seen as not knowing, or -- horrors! -- being found to be wrong? Please, show me to the company of those who easily, even enthusiastically declare, "I don't know," for these three little words are the beginning of wisdom. >... I suppose typewriters and cell phones should have civil rights too. Once they become smart enough to assert that they deserve them. Of course the real problem is that once they become just a little smarter than that, say smart enough to understand the concept of payback, they may just come to wonder why primitive, irrational, impulse-driven biological pseudo-intelligences with a lousy track record should control who gets and who doesn't get "civil rights". I limited my participation in the Syntax macht nicht Semantics discussion, and follow-on discussions, when it became a Mexican standoff. Gordon would not be dissuaded from his position without empirical evidence to counter his view, and the required evidence -- a materialist, non-magical neurological explanation for consciousness, intelligence, intentionality, mind, etc. -- does not yet exist. The problem is the human inability to accept the materialist reality, because materialism inevitably contravenes all religious conjurations, and ends with the nullification of any notion of human specialness. I killed a tiny winged insect the other night. Tiny -- maybe a couple of millimeters long -- and perfect in every detail -- legs, wings,... the whole deal. I pressed down with my finger, and the "miracle" of life, of evolution, of a DNA (indistinguishable from human DNA)-driven program, became a grey milligram smear of dust. And I thought to myself "No difference between that tiny smear of dust and me, except of course that I was still alive." So, sorry Gordon, but it's all -- we're all -- just stardust in the petri dish. No special juju. Unique ***configurations*** of stardust which exploit unique details of physics and chemistry. But no juju. Rust, tumbleweeds, consciousness, cow patties,... it's all the same. When the particular details are adequately understood, and applied, then your toaster will probably have something to say about civil rights.
I find it liberating, but ymmv. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From emlynoregan at gmail.com Sat Jan 30 00:01:18 2010 From: emlynoregan at gmail.com (Emlyn) Date: Sat, 30 Jan 2010 10:31:18 +1030 Subject: [ExI] geeky people, I need a couple of alpha testers... In-Reply-To: References: <710b78fc1001281711n576e2ee9k186683d2ca0298ca@mail.gmail.com> <710b78fc1001290615v16e374ei1a6f20a24656ca26@mail.gmail.com> Message-ID: <710b78fc1001291601v2bed759cqacc5d0c00f91593c@mail.gmail.com> 2010/1/30 BillK : > On 1/29/10, Emlyn wrote: >> That's a really fair criticism. I tried newsreaders a few times, and >> just couldn't get into them, I'd just stop looking at them after a >> while. I've found for years that I was a one messaging app person, and >> that app is gmail; if it doesn't hit my gmail it doesn't exist. Even >> facebook, of which I am an embarrassingly heavy user, is only really >> sticky for me because of email notifications. I use twitter a bit via >> a firefox add-on, but not much; it won't play well with email, and so >> I'm not a big fan. >> >> Maybe one day google wave will dislodge me from my anachronisms? Just >> now though, it's still email for me. >> >> (btw you should see my crazy filter list :-) ) > > Well, if you're crazy about Gmail, then I think you should give Google > Reader another try. > > You can set up reader folders like News, Financial, Tech, Tech News, > Science, etc. and allocate each RSS feed to be put into the > appropriate folder. > Google Reader will also suggest more feeds that you might like, > depending on your subscriptions. I probably should give it another go. Honestly, I think I'd do better with a firefox add-on which would alert me of new stuff turning up. > > The main difference I see is that Gmail is usually stuff that *must* > be read, whereas on Reader you can stack up thousands of feeds to read > at your leisure and you know nobody there is waiting on an urgent > response. > I already have this in gmail (different tags for different types of stuff). Newsfeeds never appear in my inbox. -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From pharos at gmail.com Sat Jan 30 00:11:25 2010 From: pharos at gmail.com (BillK) Date: Sat, 30 Jan 2010 00:11:25 +0000 Subject: [ExI] Understanding is useless In-Reply-To: References: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> <165704.91501.qm@web36502.mail.mud.yahoo.com> Message-ID: On 1/29/10, Jeff Davis wrote: > I limited my participation in the Syntax macht nicht Semantics discussion, and > follow-on discussions, when it became a Mexican standoff. Gordon > would not be dissuaded from his position without empirical evidence to > counter his view, and the required evidence -- a materialist, > non-magical neurological explanation for consciousness, intelligence, > intentionality, mind, etc. -- does not yet exist. > Yes, it is Gordon saying "Tis!" and everyone else saying "Tisn't!" > > So, sorry Gordon, but it's all -- we're all -- just stardust in the > petri dish. No special juju. Unique ***configurations*** of stardust > which exploit unique details of physics and chemistry. But no juju. > Rust, tumbleweeds, consciousness, cow patties,... it's all the same. > When the particular details are adequately understood, and applied, > then your toaster will probably have something to say about civil > rights. I find it liberating, but ymmv. > > I agree, but it makes me angry when the life force switches off. All that experience, knowledge, friendship,....... just gone. There should be a law against it. BillK From gts_2000 at yahoo.com Sat Jan 30 00:36:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 16:36:36 -0800 (PST) Subject: [ExI] Understanding is useless In-Reply-To: Message-ID: <738552.76449.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/29/10, Jeff Davis wrote: > Gordon would not be dissuaded from his position without empirical > evidence to counter his view, and the required evidence -- a > materialist, non-magical neurological explanation for consciousness, > intelligence, intentionality, mind, etc. -- does not yet exist. You still don't understand me, Jeff. I argue here for exactly such an explanation. The mystics -- the people who believe in magic -- are those who believe we can separate mental phenomena from the neurological material that causes those phenomena, as if the mind exists like software running on hardware. -gts From thespike at satx.rr.com Sat Jan 30 01:01:20 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 29 Jan 2010 19:01:20 -0600 Subject: [ExI] Understanding is useless In-Reply-To: <738552.76449.qm@web36505.mail.mud.yahoo.com> References: <738552.76449.qm@web36505.mail.mud.yahoo.com> Message-ID: <4B6384E0.3080205@satx.rr.com> On 1/29/2010 6:36 PM, Gordon Swobe wrote: > Jeff Davis wrote: >> > Gordon would not be dissuaded from his position without empirical >> > evidence to counter his view, and the required evidence -- a >> > materialist, non-magical neurological explanation for consciousness, >> > intelligence, intentionality, mind, etc. -- does not yet exist. > You still don't understand me, Jeff. I argue here for exactly such an explanation. I understand that, Gordon, but I don't understand what kind of explanation would satisfy you. What class of operator do you suppose might fit into the "here a miracle occurs" slot? Or are you (and maybe Searle) saying, "Dunno. It's very puzzling, just as solar output was prior to the discovery of nuclear burning." But if so, even that example was not a change in kind, not really -- you seem to be looking for an *ontological* novelty. Which is probably why most other posters think you're grasping at souls. Damien Broderick From stathisp at gmail.com Sat Jan 30 01:08:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 30 Jan 2010 12:08:31 +1100 Subject: [ExI] Understanding is useless (was: The digital nature of brains) In-Reply-To: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> References: <391208.96649.qm@web36505.mail.mud.yahoo.com> <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> Message-ID: 2010/1/30 John Clark : > Hey, speak for yourself. I just input an ASCII sequence, process it > syntactically, and then output a different ASCII sequence. The fact that I > have no knowledge of the meaning of one bit of it I have never found the > least bit inconvenient, as meaning never actually does anything, so you can > get along just fine without it. For example, the Turing Test is completely > uninterested in meaning, as is Evolution, and yet it managed to produce the > human mind, so it's not much of a stretch to imagine somebody could write a > good post without having any idea of what it means. > Yes yes I know, I'm setting myself up perfectly for the retort "Haw, I > always knew you didn't know what you were talking about", and it's true I > don't have a clue what I'm talking about, but I don't consider that an > insult. The fact that I'm lacking a fifth wheel called "understanding" has > never been the slightest handicap to me, I can still produce a pretty good > ASCII sequence. There is no concept more useless than meaning. I think what you're saying is that meaning is nothing *over and above* the ability to use words appropriately, just as walking is nothing over and above putting one foot before the other in a coordinated fashion. -- Stathis Papaioannou From steinberg.will at gmail.com Sat Jan 30 01:13:46 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 29 Jan 2010 20:13:46 -0500 Subject: [ExI] Understanding is useless In-Reply-To: <4B6384E0.3080205@satx.rr.com> References: <738552.76449.qm@web36505.mail.mud.yahoo.com> <4B6384E0.3080205@satx.rr.com> Message-ID: <4e3a29501001291713v1210856fl81b8e2415854ece4@mail.gmail.com> > materialism inevitably contravenes all religious conjurations, > and ends with the nullification of any notion of human specialness. Not necessarily. Materialism (especially given shaky quantum dimensionality/causality) certainly does not preclude specialness. People seem so ready to dispose of the human "special" that they forget a true, quantifiable uniqueness about the universe. But ours lies in the ability to cause change. I don't think one would argue that a button rigged to destroy the universe has the same "significance" as one rigged to nothing, not thinking in terms of destructiveness (a human term) but in terms of its ability to radically displace and change aspects of systems. The fact that we humans have gone from shitting around in caves to being able to destroy our own planet in just a few thousand years presupposes remarkable progress for the future, barring any catastrophic events. Humans have high causal potential, marked by one coherent system's ability to cause intersystemic changes. A particle, though perhaps able to CAUSE a significant event, is a meaningless system devoid of intentionality and thus with a lower chance of having non-negligible effects. A human can cause change and still remain, for the most part, an identical yet equally complicated being, allowing for more changes over a longer period of time. Life's poor player may inspire many audience members to some sort of action, even if he is only a candle's flicker in the immutable darkness of the universe...eh? From stathisp at gmail.com Sat Jan 30 01:40:55 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 30 Jan 2010 12:40:55 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <176490.99403.qm@web36504.mail.mud.yahoo.com> References: <176490.99403.qm@web36504.mail.mud.yahoo.com> Message-ID: On 30 January 2010 02:56, Gordon Swobe wrote: > --- On Fri, 1/29/10, Stathis Papaioannou wrote: >>> You lost me there. Either DWAP has conscious >>> understanding of W (in which case it 'has semantics'), or >>> else DWAP does not have conscious understanding of W. >> >> It depends on whether DWAP is actually capable of natural >> language. > > I explained DWAP's capabilities, but I will again: > > The human operator enters a word. DWAP assigns that word to variable W and then looks up its definition and assigns that definition to D.
DWAP makes the association W=D, and then looks up the definition of each word in D and assigns those definitions to those words in the same way, and then does the same with those words, and so on and so on and so on until it exhausts all possible English words associated with W. To make it more interesting, let us say that DWAP runs this algorithm on every word in the complete English dictionary. Let us say also that DWAP holds all those hundreds of millions of associations live in its massive RAM storage (some people like to equate RAM to conscious mind, and so I humor them). > > Is the following sentence true? Or is it false? > > DWAP has conscious understanding of the meanings of English words. It's false. What you describe by itself won't allow DWAP to use words in a meaningful way. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Jan 30 02:23:24 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 29 Jan 2010 18:23:24 -0800 (PST) Subject: [ExI] Understanding is useless In-Reply-To: <4B6384E0.3080205@satx.rr.com> Message-ID: <319620.34293.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/29/10, Damien Broderick wrote: > but I don't understand what kind of explanation would satisfy you? I don't understand your question. Explanation for what? I don't find it especially troubling that neuroscience has yet to elucidate the mechanism of consciousness, if that's what you mean. > you seem to be looking for an *ontological* novelty. Which is probably > why most other posters think you're grasping at souls. Glad to see that you understand I have no interest in positing the existence of anything non-material. I don't consider myself as "looking for ontological novelty". Just killing some spare time while trying to clear up what I see as some misconceptions about strong AI and the philosophy of mind. -gts From msd001 at gmail.com Sat Jan 30 03:16:20 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 29 Jan 2010 22:16:20 -0500 Subject: [ExI] Understanding is useless In-Reply-To: <319620.34293.qm@web36505.mail.mud.yahoo.com> References: <4B6384E0.3080205@satx.rr.com> <319620.34293.qm@web36505.mail.mud.yahoo.com> Message-ID: <62c14241001291916g3663c1dega926e8e12f9964ed@mail.gmail.com> On Fri, Jan 29, 2010 at 9:23 PM, Gordon Swobe wrote: > I don't consider myself as "looking for ontological novelty". Just killing some spare time while trying to clear up what I see as some misconceptions about strong AI and the philosophy of mind. You don't see how it's even a little bit presumptuous that you are able to "clear up" supposed misconceptions in two disciplines by "just killing some spare time" ? What IS amazing to me is that I work to clarify my thinking before hitting send and get one or two responses in a week. You assert that you're a super genius who has it all worked out and the blowback from many very smart (and patient) people showing you how and why you are uninformed on a body of knowledge fills my inbox every day. I am only feeding the troll behavior today because I felt guilty for intentionally archiving without reading any thread in which you are participating. Then the one email I read has that line above. Really incredible.... 
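Gordon's DWAP procedure above -- fetch a word's definition, then the definitions of every word in that definition, and so on until exhaustion -- is a breadth-first closure over a dictionary, and fits in a few lines. A minimal sketch, assuming a three-entry toy dictionary invented for illustration (a real DWAP would load a complete English dictionary):

from collections import deque

# Toy dictionary, invented for illustration.
DICT = {
    "sphere": "a round solid figure",
    "round": "shaped like a circle or sphere",
    "figure": "a shape or form",
}

def associate(word, dictionary=DICT):
    """Build DWAP-style associations: word -> definition, then the
    definition of every word in that definition, and so on, until
    no new words remain (the closure Gordon describes)."""
    associations = {}
    queue = deque([word])
    while queue:
        w = queue.popleft()
        if w in associations or w not in dictionary:
            continue  # skip words already expanded or absent from the dictionary
        associations[w] = dictionary[w]      # the W = D step
        queue.extend(dictionary[w].split())  # enqueue every word of D
    return associations

print(associate("sphere"))

Note that every association the loop builds maps word symbols to more word symbols; the table never reaches outside the dictionary. That closed loop is precisely the property the thread is disputing.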
From spike66 at att.net Sat Jan 30 03:21:10 2010 From: spike66 at att.net (spike) Date: Fri, 29 Jan 2010 19:21:10 -0800 Subject: [ExI] cruel beast of capitalsim: was RE: Humans caused Aussiemegafauna extinction In-Reply-To: References: <4B6275AF.3030307@satx.rr.com><368968B455C245C7B815B718D67793E0@spike> Message-ID: <584C357785DB43FF90B855E900AFC949@spike> > ...On Behalf Of Jeff Davis > ... > ...Bin Laden is a gentle, thoughtful, pious, freedom fighter... > Best, jeff davis Freedom fighter? Freedom from tall buildings? Freedom from innocent American citizens? Oy vey Jeff, on this topic do consider your words carefully sir. spike From eric at m056832107.syzygy.com Sat Jan 30 03:54:08 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 30 Jan 2010 03:54:08 -0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <234528.39818.qm@web36505.mail.mud.yahoo.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <234528.39818.qm@web36505.mail.mud.yahoo.com> Message-ID: <20100130035408.5.qmail@syzygy.com> Gordon writes: >If you have a genuine interest in this subject and want to engage me >in intelligent discussion then please carefully read the target >article: > >MINDS, BRAINS, AND PROGRAMS >http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html Ok, I just went and read the whole thing. I think we've pretty well covered everything in there numerous times in this discussion already. I'll note a few things, though. Early on, Searle characterizes weak and strong AI, saying in effect that weak AI attempts to study human cognition, while strong AI attempts to duplicate it. Searle: But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. He then starts in on the whole Chinese room thing, which makes up the bulk of the paper. Here's an interesting bit: To me, Chinese writing is just so many meaningless squiggles. Here, the lack of understanding is only relative to him. Later he will assert that a system which answers non-obvious questions about Chinese stories also has no meaning for the symbols. It looks like he's extrapolated meaninglessness beyond its applicability. [...] in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing. Well, nothing except vast quantities of information about Chinese language sufficient to answer questions as well as a native speaker. He seems to consider this a trivial detail. On the possibility that understanding is "more symbol manipulation": I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example. Searle is acknowledging that his argument is weak. [...] what is suggested though certainly not demonstrated -- by the example is that the computer program is simply irrelevant to my understanding of the story. Again, not demonstrated. Good thing too, since it's the computer program that is doing *all* of the understanding in the example. [...] whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. Here he's simply asserting what he's trying to show. 
The human is a trivial component of the system, so its lack of understanding does not impair the system's understanding. My car and my adding machine, on the other hand, understand nothing[.] Once again, he's simply asserting something. Again: The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero. Why shouldn't it be partial? Searle just asserts that it is zero. On to the systems critique: Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle." Here Searle makes a level of abstraction error. The symbols may not mean anything to the human, but they certainly mean something to the system, or it wouldn't be able to answer questions about them, as we are told it can. Indeed, in the case as described, the Chinese subsystem is simply a part of the English subsystem, a part that engages in meaningless symbol manipulation according to rules in English. The symbols are meaningless because they are meaningless to *Searle*, not because they would be meaningless to the Chinese speaking system as a whole. But the whole point of the examples has been to try to show that that couldn't be sufficient for understanding, in the sense in which I understand stories in English, because a person, and hence the set of systems that go to make up a person, could have the right combination of input, output, and program and still not understand anything in the relevant literal sense in which I understand English. Level of abstraction error again: because the human does not understand, the vastly greater system which it is a part of must not understand either. In short, the systems reply simply begs the question by insisting without argument that the system must understand Chinese. Looks to me like Searle is projecting a bit of begging the question onto his criticizers. Searle states as part of the problem that the system behaves as though it understands Chinese as well as a native speaker. He then repeatedly assumes that the system does not understand, and concludes that it does not understand. The systems critique can be stated without an assumption of understanding: If there is understanding, then it can reside outside of the human component. Which is still enough to devastate most of Searle's claims, as he's always relying on his statement that the human doesn't understand to support the notion that there is no understanding. It is, by the way, not an answer to this point to say that the Chinese system has information as input and output and the stomach has food and food products as input and output, since from the point of view of the agent, from my point of view, there is no information in either the food or the Chinese -- the Chinese is just so many meaningless squiggles. Searle claims here that there is no information in the meaningless (to him) Chinese symbols. I'm going to invoke Shannon here. Even without any meaning, the Chinese symbols are different from each other, and so they hold information. Searle can tell the difference between different characters without knowing what they mean. It's just plain wrong to say that there is no information. It is an important mistake too, as this whole thing revolves around the notion of information processing. Such a basic mistake does not give me confidence in Searle's conclusions about these matters. Repeated unsubstantiated assertions don't help either. 
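Eric's Shannon point -- that distinguishable symbols carry information whether or not anyone attaches meaning to them -- can be checked directly. A minimal sketch; the sample string is arbitrary:

import math
from collections import Counter

def entropy_bits(s):
    """Shannon entropy of a string in bits per symbol, computed from
    empirical symbol frequencies. No meanings are involved:
    distinguishable symbols are all that entropy requires."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy_bits("squiggle squoggle"))  # > 0: the "meaningless" marks carry information

The entropy is positive for any string whose symbols vary, meaning or no meaning, which is all the claim requires.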
It is not the aim of this article to argue against McCarthy's point, so I will simply assert the following without argument. Except that it is the aim of the article to argue against McCarthy's point. At least here he's acknowledging his unsubstantiated assertion. On to the robot criticism: Now in this case I want to say that the robot has no intentional states at all; Searle *wants* the robot not to be intentional. He's attached to the outcome, and it motivates his unsubstantiated assertion. On to brain simulators: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. That's not how he defined strong AI above. Just write a program with understanding, no need for ignorance about brain function. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. Again, asserted without support. If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it, pending some reason not to. We wouldn't need to know in advance that its computer brain was a formal analogue of the human brain. Searle presupposes that we have built such a robot, but again just asserts without support that it won't have intentionality: We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality. Well, clearly Searle abandons that assumption, but I see no reason to. Searle does not supply a reason. Let us now return to the question I promised I would try to answer: [Could a machine think?] granted that in my original example I understand the English and I do not understand the Chinese, and granted therefore that the machine doesn't understand either English or Chinese He's asking us to grant him just what he's asking about! How much more blatant could he be? I am not at all sure what he means by the parenthetical here: It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs) later: Of course the brain is a digital computer. Since everything is a digital computer, brains are too. What?! Everything is a digital computer? That's just absurd. I have no clue what he's trying to say here. He's not attributing this to someone he's criticizing, though. I see nothing to suggest that it isn't a serious statement. I can't attach a coherent meaning to that string of symbols, though. -eric From jrd1415 at gmail.com Sat Jan 30 05:35:34 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 29 Jan 2010 22:35:34 -0700 Subject: [ExI] cruel beast of capitalsim: was RE: Humans caused Aussiemegafauna extinction In-Reply-To: <584C357785DB43FF90B855E900AFC949@spike> References: <4B6275AF.3030307@satx.rr.com> <368968B455C245C7B815B718D67793E0@spike> <584C357785DB43FF90B855E900AFC949@spike> Message-ID: I'm Sorry, Spike. You're right of course. That was pure trolling on my part. Never mind. I didn't mean it. 
Clearly, bin laden is a psychopathic monster whose every breath is a prayer for the murderous destruction of all Americans, particularly small children, puppy dogs, and kindly neighbor ladies who make butter cookies with sprinkles on them. Bin Laden is known to hate sprinkles,...and kittens,...and freedom. Nine eleven. Nine eleven. Nine eleven. Darkness descends. In the fawning blandishments of forgetfulness a wisp of alien narrative is unmade. Best, Jeff Davis "We call someone insane who does not believe as we do to an outrageous extent." Charles McCabe On Fri, Jan 29, 2010 at 8:21 PM, spike wrote: > >> ...On Behalf Of Jeff Davis >> ... >> ...Bin Laden is a gentle, thoughtful, pious, freedom fighter... >> Best, jeff davis > > Freedom fighter? Freedom from tall buildings? Freedom from innocent > American citizens? Oy vey Jeff, on this topic do consider your words > carefully sir. > > spike From stathisp at gmail.com Sat Jan 30 05:53:52 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 30 Jan 2010 16:53:52 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100130035408.5.qmail@syzygy.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <234528.39818.qm@web36505.mail.mud.yahoo.com> <20100130035408.5.qmail@syzygy.com> Message-ID: On 30 January 2010 14:54, Eric Messick wrote: > Gordon writes: >>If you have a genuine interest in this subject and want to engage me >>in intelligent discussion then please carefully read the target >>article: >> >>MINDS, BRAINS, AND PROGRAMS >>http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html > But the whole point of the examples has been to try to show that > that couldn't be sufficient for understanding, in the sense in > which I understand stories in English, because a person, and > hence the set of systems that go to make up a person, could have > the right combination of input, output, and program and still not > understand anything in the relevant literal sense in which I > understand English. > > Level of abstraction error again: because the human does > not understand, the vastly greater system which it is a part of must not > understand either. > > In short, the systems reply simply begs the question by insisting > without argument that the system must understand Chinese. > > Looks to me like Searle is projecting a bit of begging the question > onto his criticizers. Searle states as part of the problem that the > system behaves as though it understands Chinese as well as a native > speaker. He then repeatedly assumes that the system does not > understand, and concludes that it does not understand. > > The systems critique can be stated without an assumption of > understanding: > > If there is understanding, then it can reside outside of the human > component. > > Which is still enough to devastate most of Searle's claims, as he's > always relying on his statement that the human doesn't understand to > support the notion that there is no understanding. This seems to be the main problem with the Chinese Room Argument. Searle assumes that because the human doesn't understand Chinese, the Chinese Room doesn't understand Chinese. He counters the systems critique by modifying the experiment so that the man internalises the symbol manipulation protocols, i.e. so that the man is now the whole system. But that just indicates that he doesn't understand the systems critique. > Of course the brain is a digital computer. Since everything is a > digital computer, brains are too. > > What?! Everything is a digital computer? That's just absurd. I have > no clue what he's trying to say here. He's not attributing this to > someone he's criticizing, though. I see nothing to suggest that it > isn't a serious statement. I can't attach a coherent meaning to that > string of symbols, though. I think he means that the brain, like everything else, is computable, a statement of the physical Church-Turing thesis: "If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think." -- Stathis Papaioannou From eric at m056832107.syzygy.com Sat Jan 30 07:08:39 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 30 Jan 2010 07:08:39 -0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <234528.39818.qm@web36505.mail.mud.yahoo.com> <20100130035408.5.qmail@syzygy.com> Message-ID: <20100130070839.5.qmail@syzygy.com> Stathis writes: >This seems to be the main problem with the Chinese Room Argument. >Searle assumes that because the human doesn't understand Chinese, the >Chinese Room doesn't understand Chinese. He counters the systems >critique by modifying the experiment so that the man internalises the >symbol manipulation protocols, i.e. so that the man is now the whole >system. But that just indicates that he doesn't understand the systems >critique. Yes. The question I have is: why is the systems view so hard for people to see? Is there a difference in brain structure between those who understand it and those who don't? It doesn't *seem* all that complicated. The framers of the Constitution took a systems view of government which seems to be beyond the understanding of the average voter. The result is an enormous waste of life, liberty, and capital. Nanotechnology development faces huge hurdles, apparently because really good scientists can't grasp the systems view. That may cost us a frightful amount. The very problem we've been discussing could cost the lives of billions of people who fail to get uploaded because of a lack of understanding. That's why I've been responding to this thread despite the fact that the hopelessness of it was apparent long before I first posted. Searle: >>Of course the brain is a digital computer. Since everything is a >>digital computer, brains are too. >I think he means that the brain, like everything else, is computable, >a statement of the physical Church-Turing thesis: Perhaps, though it's a really funny way of saying it. I mean, is a rock a digital computer because you can simulate it with one? Not a very useful term in that case.
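The systems view Eric and Stathis keep restating has a compact computational analogue: an interpreter executing a rule table. A minimal sketch, with rules invented for illustration; the lookup loop plays the man in the room, and whatever competence the system displays lives in the table it executes, not in the loop:

# A toy illustration of the systems reply: the loop below (the "man in
# the room") does nothing but match opaque tokens against a rule table;
# any competence belongs to the table it executes. Rules are invented
# for illustration only.

RULEBOOK = {
    "squiggle squiggle": "squoggle squoggle",
    "ni hao": "ni hao! ni chi le ma?",
}

def room(question, rules=RULEBOOK):
    # The "man": a blind lookup, with no access to what the tokens are about.
    return rules.get(question, "squoggle")

print(room("ni hao"))

Whether executing a sufficiently rich rulebook could ever amount to understanding is the very point in dispute; the sketch only shows how the two levels -- the executing component and the executed system -- come apart.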
-eric From bbenzai at yahoo.com Sat Jan 30 11:36:48 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 30 Jan 2010 03:36:48 -0800 (PST) Subject: [ExI] Understanding is useless In-Reply-To: Message-ID: <920952.60063.qm@web113605.mail.gq1.yahoo.com> On 1/29/2010 6:36 PM, Gordon Swobe wrote: > Jeff Davis wrote: >> > Gordon would not be dissuaded from his position without empirical >> > evidence to counter his view, and the required evidence -- a >> > materialist, non-magical neurological explanation for consciousness, >> > intelligence, intentionality, mind, etc --does not yet exist. > You still don't understand me, Jeff. I argue here for exactly such an explanation. It seems that Gordon is so convinced he's right that anyone who disagrees with him obviously *can't* understand, no matter how carefully they spell out their understanding of his position, and the logical errors in it. Errors? no, no, there can't be errors in a point of view that's correct, therefore anyone who sees errors in it mustn't understand it! QED. It seems the title of this thread really is true! Ben Zaiboc From stefano.vaj at gmail.com Sat Jan 30 12:09:17 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 30 Jan 2010 13:09:17 +0100 Subject: [ExI] 1984 and Brave New World In-Reply-To: <4095387A-3086-44E8-8FE9-090BB15A3937@bellsouth.net> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5E0BE1.8030403@satx.rr.com> <3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net> <580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com> <3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net> <580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com> <0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net> <580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com> <4095387A-3086-44E8-8FE9-090BB15A3937@bellsouth.net> Message-ID: <580930c21001300409j5248498s5f0586034446eadd@mail.gmail.com> 2010/1/27 John Clark : > Well sure, but how likely is it that we will choose the path of progress > especially when we can receive the pride of making great progress while > sitting on our ass and without progressing one inch. If you think this is a > debasement of the human spirit then all you need to do is change your mind, > and I do mean CHANGE YOUR MIND. Now you think the idea is downright noble. Absolutely you can. But you would not, if you like the idea of actually choosing it, and not just getting the illusion. In fact, you could even choose to alter the human mind to induce restlessness, eagerness for change and quest for greatness where the same is too easily and too promptly self-contented... :-) > But who among us > wouldn't want to be a little happier? Depends on dominant societal values (unless of course "happiness" is interpreted in so broad a sense as to become as insignificant as the "economic interest" of the non-falsifiable version of classical economic theory). For instance, monotheism was an era marked for a long time by the obsession of... suffering as expiation. And there are of course more H+ oriented POVs on the subject: << What is the greatest thing ye can experience? It is the hour of great contempt. The hour in which even your happiness becometh loathsome unto you, and so also your reason and virtue. The hour when ye say: "What good is my happiness! It is poverty and pollution and wretched self-complacency. But my happiness should justify existence itself!">> (Also Sprach Zarathustra). 
-- Stefano Vaj From gts_2000 at yahoo.com Sat Jan 30 13:31:23 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 30 Jan 2010 05:31:23 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <585706.30115.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/29/10, Stathis Papaioannou wrote: >> Is the following sentence true? Or is it false? >> >> DWAP has conscious understanding of the meanings of >> English words. > > It's false. Good, glad you think so. > What you describe by itself won't allow DWAP to use words in a > meaningful way. Let's add a subroutine to the program to allow DWAP to use words in a meaningful way. The human types in a sentence of this form: "Say, DWAP, what does x mean?" where x equals any English word. Accessing its vast database of word associations, DWAP responds in the form "x means..." and tells the operator the definition of x and the definition of every word in x's definition and the definitions of all the words in all those definitions, and so on. Surely that counts as a meaningful use of words. Does DWAP have conscious understanding of words now? -gts From bbenzai at yahoo.com Sat Jan 30 13:30:08 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 30 Jan 2010 05:30:08 -0800 (PST) Subject: [ExI] Understanding is useless In-Reply-To: Message-ID: <780472.51521.qm@web113601.mail.gq1.yahoo.com> Gordon Swobe wrote: --- On Fri, 1/29/10, Damien Broderick wrote: >> but I don't understand what kind of explanation would satisfy you? > I don't understand your question. Explanation for what? I don't find it > especially troubling that neuroscience has yet to elucidate the > mechanism of consciousness, if that's what you mean. Yet you feel quite happy to claim to know what can and can't be conscious? Do you see the problem here? Ben Zaiboc From pharos at gmail.com Sat Jan 30 14:09:55 2010 From: pharos at gmail.com (BillK) Date: Sat, 30 Jan 2010 14:09:55 +0000 Subject: [ExI] geeky people, I need a couple of alpha testers... In-Reply-To: <710b78fc1001291601v2bed759cqacc5d0c00f91593c@mail.gmail.com> References: <710b78fc1001281711n576e2ee9k186683d2ca0298ca@mail.gmail.com> <710b78fc1001290615v16e374ei1a6f20a24656ca26@mail.gmail.com> <710b78fc1001291601v2bed759cqacc5d0c00f91593c@mail.gmail.com> Message-ID: On 1/30/10, Emlyn wrote: > I probably should give it another go. Honestly, I think I'd do better > with a firefox add-on which would alert me of new stuff turning up. > > Google Reader has just added a new feature to monitor web sites. You can now subscribe to any URL -- even if it's not an RSS feed, and even if the site itself doesn't publish an RSS feed. If Google Reader doesn't recognize a URL as RSS, a dialog box will offer to "Create a feed for you." Just click "Create a feed," and now Google Reader will monitor the page. If the page changes, a link to that page will show up in Google Reader as if a new RSS item had been posted. They'll even show you a preview "snippet" of the page, just as they do with real RSS posts. > > I already have this in gmail (different tags for different types of > stuff). Newsfeeds never appear in my inbox. > Yup. I do the same with my Gmail. Every mail gets automatically filed in its own folder. So my Inbox is always empty. (Except for the occasional spam or unexpected mail from somebody I don't usually get mail from.) Another advantage with using Reader for newsfeeds is that they don't use up my mail storage space and I don't have to delete old mails.
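The Reader feature BillK describes -- treating an arbitrary page as a feed by watching it for changes -- reduces to polling and comparing snapshots. A minimal standard-library sketch; the URL and polling interval are placeholders, and a real feed generator would also extract titles and snippets:

import hashlib
import time
import urllib.request

def watch(url, interval=3600):
    """Poll a page and report when its content changes: the essence
    of 'create a feed' for a page that publishes no RSS."""
    last = None
    while True:  # runs until interrupted
        body = urllib.request.urlopen(url).read()
        digest = hashlib.sha256(body).hexdigest()
        if last is not None and digest != last:
            print("Page changed:", url)  # a real reader would emit a feed item
        last = digest
        time.sleep(interval)

# watch("http://example.com/")  # placeholder URL; loops forever

Hashing the raw bytes is the simplest change test; a production service would presumably diff the extracted content instead, so that trivial markup churn doesn't fire spurious items.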
Cheers, BillK From gts_2000 at yahoo.com Sat Jan 30 14:31:15 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 30 Jan 2010 06:31:15 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100130035408.5.qmail@syzygy.com> Message-ID: <747916.4268.qm@web36504.mail.mud.yahoo.com> --- On Fri, 1/29/10, Eric Messick wrote: > Ok, I just went and read the whole thing. Thank you Eric. > He then starts in on the whole Chinese room thing, which > makes up the bulk of the paper.? Here's an interesting bit: > > ? ???To me, Chinese writing is just so > many meaningless squiggles. > > Here, the lack of understanding is only relative to > him.? Right. The question of strong AI concerns not whether outside observers understand the system's inputs and outputs. It's about whether the system itself understands them. > Later he will assert that a system which answers non-obvious > questions about Chinese stories also has no meaning for the symbols.? > It looks like he's extrapolated meaninglessness beyond its > applicability. He means that the system has no understanding. > [...] in cases where the computer > is not me, the computer has > nothing more than I have in the > case where I understand nothing. > > Well, nothing except vast quantities of information about > Chinese language sufficient to answer questions as well as a native > speaker. He seems to consider this a trivial detail. That information in the computer would seem important only to someone who did not understand the question of strong vs weak AI. If the system has no conscious understanding of the inputs and outputs but can nonetheless converse intelligently by virtue of having information then it has only weak AI. Searle has no objection to weak AI. > On the possibility that understanding is "more symbol > manipulation": > > ? ???I have not demonstrated that this > claim is false, > ? ???but it would certainly appear an > incredible claim in the example. > > Searle is acknowledging that his argument is weak. He's addressing this claim: "2) that what the machine and its program do explains the human ability to understand the story and answer questions about it." And clearly in the example what the machine and its program do does not explain the human ability to understand the story and answer questions about it. He answers questions about one story in English and about another in Chinese, and his running the program in Chinese in no way changes the fact that he does not understand a word of Chinese. As far as a non-Chinese-speaker's understanding of Chinese goes, it makes no difference whatsoever whether he mentally runs a program that enables meaningful interactions in Chinese. This has major implications in the philosophy of mind, especially with respect to that philosophy of mind known as the computationalist theory in which all our cognitive capacities are thought to be explained by programs. The program has zero effect. > ? ???[...] what is suggested though > certainly not demonstrated -- by > ? ???the example is that the computer > program is simply irrelevant to > ? ???my understanding of the story. > > Again, not demonstrated.? His words take on more strength in context: "On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story. 
Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested though certainly not demonstrated -- by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements. As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding." > ? ???[...] whatever purely formal > principles you put into the > ? ???computer, they will not be > sufficient for understanding, since a > ? ???human will be able to follow the > formal principles without > ? ???understanding anything. > > Here he's simply asserting what he's trying to show.? Here he states an analytic truth to those who understand what is meant by "formal principles". Briefly: We cannot first understand the meaning of a symbol from looking only at its form. We must learn the meaning in some other way, and attach that meaning to the form, such that we can subsequently recognize that form and know the meaning. > The human is a > trivial component of the system, so its lack of > understanding does not > impair the system's understanding. You miss the point here. The human can internalize the scripts and the entire room, becoming the system, and this in no way changes the conclusion that he and nothing inside him can understand the meanings of the symbols. > ? ???My car and my adding machine, on > the other hand, understand > ? ???nothing[.] > > Once again, he's simply asserting something.? Do you think your car has understanding of roads? How about doorknobs and screwdrivers? He makes a reductio ad absurdem argument here (but you leave out the context) illustrating that we must draw a line somewhere between those things that have minds and those that don't. > Again: > > ? ???The computer understanding is not > just (like my understanding of > ? ???German) partial or incomplete; it > is zero. > > Why shouldn't it be partial?? Searle just asserts that > it is zero. It has zero understanding for the same reason DWAP's understanding is zero: syntax does not give semantics. > On to the systems critique: > > ? ???Whereas the English subsystem > knows that "hamburgers" refers to > ? ???hamburgers, the Chinese subsystem > knows only that "squiggle > ? ???squiggle" is followed by "squoggle > squoggle." > > Here Searle makes a level of abstraction error.? The > symbols may not mean anything to the human, but they certainly mean > something to the system, or it wouldn't be able to answer questions about > them, as we are told it can. You fail to understand the distinction between strong and weak AI. Nobody disputes weak AI. Nobody disputes that computers will someday pass the Turing test. What is disputed is whether it will ever make sense to consider computers as possessing minds in the sense that humans have minds. > ? 
> Indeed, in the case as described, the Chinese subsystem is simply > a part of the English subsystem, a part that engages in > meaningless symbol manipulation according to rules in English. > > The symbols are meaningless because they are meaningless to > *Searle*, not because they would be meaningless to the Chinese > speaking system as a whole. Apparently you believe that if you embodied the system as did Searle, and that if you did not understand the symbols as Searle didn't, that the system would nevertheless have a conscious understanding of the symbols. But I don't think you can articulate how. You just want to state it as an article of faith. > But the whole point of the examples has been to try to show that > that couldn't be sufficient for understanding, in the sense in > which I understand stories in English, because a person, and > hence the set of systems that go to make up a person, could have > the right combination of input, output, and program and still not > understand anything in the relevant literal sense in which I > understand English. > > Level of abstraction error again: because the human does > not understand, the vastly greater system which it is a part of > must not understand either. Did you simply miss his counter-argument to the systems reply? *He becomes the system* and still does not understand the symbols. There exists no "vastly greater system" that understands them, unless you want to step foot into the religious realm. > In short, the systems reply simply begs the question by insisting > without argument that the system must understand Chinese. > > Looks to me like Searle is projecting a bit of begging the > question onto his criticizers. Searle states as part of the > problem that the system behaves as though it understands Chinese as well > as a native speaker. He then repeatedly assumes that the system > does not understand, and concludes that it does not understand. If you cannot explain how it has conscious understanding then you have no reply to Searle. We cannot assume understanding based only on external behavior. > The systems critique can be stated without an assumption > of understanding: > > If there is understanding, then it can reside outside of > the human component. Again, you must have missed Searle's counter-reply. He internalizes the entire system and yet neither he nor anything inside him understands the symbols. -gts From pharos at gmail.com Sat Jan 30 14:50:03 2010 From: pharos at gmail.com (BillK) Date: Sat, 30 Jan 2010 14:50:03 +0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <747916.4268.qm@web36504.mail.mud.yahoo.com> References: <20100130035408.5.qmail@syzygy.com> <747916.4268.qm@web36504.mail.mud.yahoo.com> Message-ID: On 1/30/10, Gordon Swobe wrote: > > Again, you must have missed Searle's counter-reply. He internalizes the > entire system and yet neither he nor anything inside him understands the symbols. > > Many humans get through life 'understanding' a completely mistaken series of beliefs about everything under the sun. Astrology, creationism, superstitions, lucky socks, what drugs do, health fads, pill-popping, lucky numbers, etc. etc. They don't really have 'understanding' do they? They just muddle through life linking unconnected items and making decisions almost at random based on what they think they remember reading somewhere. An AI could approximate their behaviour with little difficulty.
Good programming should enable more correct understanding in an AI. In fact, that may be a way of detecting strong AI. It won't have all the nonsensical human 'understandings'. BillK From stathisp at gmail.com Sat Jan 30 14:59:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 31 Jan 2010 01:59:26 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <585706.30115.qm@web36508.mail.mud.yahoo.com> References: <585706.30115.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/31 Gordon Swobe : > --- On Fri, 1/29/10, Stathis Papaioannou wrote: > >>> Is the following sentence true? Or is it false? >>> >>> DWAP has conscious understanding of the meanings of >>> English words. >> >> It's false. > > Good, glad you think so. > >> What you describe by itself won't allow DWAP to use words in a >> meaningful way. > > Let's add a subroutine to the program to allow DWAP to use words in a meaningful way. The human types in a sentence of this form "Say, DWAP, what does x mean?" where x equals any English word. Accessing its vast database of word associations, DWAP responds in the form "x means..." and tells the operator the definition of x and the definition of every word in x's definition and the definitions of all the words in all those definitions, and so on. Surely that counts as a meaningful use of words. Does DWAP have conscious understanding of words now? It still doesn't count as meaningful use of words, it's just a dictionary. Meaningful use of words would involve DWAP participating in a discussion as we are now having. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Jan 30 14:45:19 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 30 Jan 2010 06:45:19 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100130070839.5.qmail@syzygy.com> Message-ID: <618757.54878.qm@web36507.mail.mud.yahoo.com> --- On Sat, 1/30/10, Eric Messick wrote: > Searle: >>> Of course the brain is a digital computer. Since everything is a digital computer, brains are too. >> >> I think he means that the brain, like everything else, >> is computable, >> a statement of the physical Church-Turing thesis: > > Perhaps, though it's a really funny way of saying it. > I mean, is a rock a digital computer because you can simulate it with > one? Not a very useful term in that case. Exactly, Eric. We can at some level of description call anything digital, but this does not make it so. Except in the special cases of actual digital computers and actual software, we merely project a digital interpretation onto a non-digital reality. -gts From gts_2000 at yahoo.com Sat Jan 30 15:51:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 30 Jan 2010 07:51:00 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <897057.43275.qm@web36504.mail.mud.yahoo.com> --- On Sat, 1/30/10, Stathis Papaioannou wrote: > It still doesn't count as meaningful use of words, it's > just a dictionary. Meaningful use of words would involve DWAP > participating in a discussion as we are now having. Okay, so I add more subroutines that allow DWAP to not only give you meaningful complete answers to your complete questions about the meanings of words, but also to respond to other complete sentences in other meaningful ways. I then use DWAP to respond to your posts here on ExI. I take it you think DWAP now has conscious understanding of words.
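To make it concrete, the kind of subroutine I mean is nothing but a lookup-and-expand loop. Here is a minimal Python sketch; DICTIONARY, define and the three-entry word list are all invented for illustration, not anyone's actual program:

    # Toy sketch of DWAP's definition subroutine (all data invented).
    # It answers "Say, DWAP, what does x mean?" by looking up x and then
    # recursively expanding the definitions of the words in x's definition.

    DICTIONARY = {
        "red": "a colour at the long-wavelength end of the visible spectrum",
        "colour": "a property of light as seen by an observer",
        "spectrum": "the range of wavelengths of light",
    }

    def define(word, seen=None):
        """Expand definitions recursively -- pure symbol shuffling."""
        if seen is None:
            seen = set()
        if word in seen or word not in DICTIONARY:
            return []                      # stop on loops and unknown words
        seen.add(word)
        lines = ["%s means: %s" % (word, DICTIONARY[word])]
        for w in DICTIONARY[word].split():
            lines.extend(define(w, seen))  # and the definition's own words
        return lines

    print("\n".join(define("red")))

Nothing in there but shuffled symbols.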
If so then what amazing thing did I, a lowly programmer of syntax, do to give a real mental life to DWAP? How did that miracle happen? Looks to me like I merely deceived you with some fancy programming tricks. -gts From stathisp at gmail.com Sat Jan 30 15:55:25 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 31 Jan 2010 02:55:25 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <747916.4268.qm@web36504.mail.mud.yahoo.com> References: <20100130035408.5.qmail@syzygy.com> <747916.4268.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/31 Gordon Swobe wrote in response to Eric Messick: >> [...] whatever purely formal principles you put into the >> computer, they will not be sufficient for understanding, since a >> human will be able to follow the formal principles without >> understanding anything. >> >> Here he's simply asserting what he's trying to show. > > Here he states an analytic truth to those who understand what is meant by "formal principles". A neuron will also be able to follow the formal principles without understanding anything, or at any rate understanding much less than a human doing the same job. > Briefly: We cannot first understand the meaning of a symbol from looking only at its form. We must learn the meaning in some other way, and attach that meaning to the form, such that we can subsequently recognize that form and know the meaning. Yes, symbol grounding, which occurs when you have sensory input. That completely solves the logical problem of where symbols get their meaning, but Searle goes on to postulate a superfluous, magical further step whereby symbols get *true* meaning. >> The human is a >> trivial component of the system, so its lack of >> understanding does not >> impair the system's understanding. > > You miss the point here. The human can internalize the scripts and the entire room, becoming the system, and this in no way changes the conclusion that he and nothing inside him can understand the meanings of the symbols. But the human's *intelligence* is irrelevant to the system, except insofar as it allows him to do the symbol manipulation. It makes no essential difference to the consciousness of the system, such as it may be, if the symbol manipulation is done by a human, a punchcard machine or a trained mouse. Stretching the definition of the term, neurons also have a small amount of intelligence since they have to know when to fire and when not to fire. But you don't argue that since the neurons don't understand the ultimate result of their behaviour, the brain as a whole doesn't understand it either. > It has zero understanding for the same reason DWAP's understanding is zero: syntax does not give semantics. Except that this is wrong: syntax does give semantics, once the symbols are grounded. > You fail to understand the distinction between strong and weak AI. Nobody disputes weak AI. Nobody disputes that computers will someday pass the Turing test. What is disputed is whether it will ever make sense to consider computers as possessing minds in the sense that humans have minds. Everyone who disputes that computers can have minds also either disputes weak AI or is self-contradictory. You've admitted that it's absurd to say that you might be a zombie and not know it, and yet weak AI would make such an absurdity possible. Do you deny that?
You've avoided dealing with it, but you haven't actually denied it: "I don't think that weak AI would allow the creation of a partial zombie because..." Note that this is *not* an argument about computers and minds per se, but an argument about the possibility of weak AI. Weak AI presents a logical contradiction. Not even God could do it. >> The symbols are meaningless because they are meaningless to >> *Searle*, not because they would be meaningless to the Chinese >> speaking system as a whole. > > Apparently you believe that if you embodied the system as did Searle, and that if you did not understand the symbols as Searle didn't, that the system would nevertheless have a conscious understanding of the symbols. But I don't think you can articulate how. You just want to state it as an article of faith. It would happen by the same magical process that occurs in the brain. When the system is complex enough to display humanlike behaviour, humanlike consciousness results. > Did you simply miss his counter-argument to the systems reply? *He becomes the system* and still does not understand the symbols. There exists no "vastly greater system" that understands them, unless you want to step foot into the religious realm. Did you simply miss the counter-argument to the counter-argument? His intelligence is simply a trivial component of the system. That the neurons lack understanding or that the heart which is an essential component pumping blood to the neurons lacks understanding does not mean that the person, comprised of multiple components organised in a system, lacks understanding. No-one has ever claimed that transistors and copper wires understand the grand computations they participate in. >> In short, the systems reply simply begs the question by insisting >> without argument that the system must understand Chinese. >> >> Looks to me like Searle is projecting a bit of begging the >> question onto his criticizers. Searle states as part of the >> problem that the system behaves as though it understands Chinese as well >> as a native speaker. He then repeatedly assumes that the system >> does not understand, and concludes that it does not understand. > > If you cannot explain how it has conscious understanding then you have no reply to Searle. We cannot assume understanding based only on external behavior. That is all that we can ever do. >> The systems critique can be stated without an assumption >> of understanding: >> >> If there is understanding, then it can reside outside of >> the human component. > > Again, you must have missed Searle's counter-reply. He internalizes the entire system and yet neither he nor anything inside him understands the symbols. It doesn't magically elevate him from his position as a low level part of the system if he internalises everything. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 30 16:05:13 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 31 Jan 2010 03:05:13 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <897057.43275.qm@web36504.mail.mud.yahoo.com> References: <897057.43275.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/31 Gordon Swobe : > --- On Sat, 1/30/10, Stathis Papaioannou wrote: > >> It still doesn't count as meaningful use of words, it's >> just a dictionary. Meaningful use of words would involve DWAP >> participating in a discussion as we are now having.
> > Okay, so I add more subroutines that allow DWAP to not only give you meaningful complete answers to your complete questions about the meanings of words, but also to respond to other complete sentences in other meaningful ways. I then use DWAP to respond to your posts here on ExI. > > I take it you think DWAP now has conscious understanding of words. If so then what amazing thing did I, a lowly programmer of syntax, do to give a real mental life to DWAP? How did that miracle happen? > > Looks to me like I merely deceived you with some fancy programming tricks. Go ahead and write this clever program, you'll be a lot more famous than Searle! Your argument is of the form: Here we have a couple of dozen different elements, and obviously they're pretty stupid. No understanding in them whatsoever! So how is it possible to get from a completely stupid thing to something as clever as a human? Obviously, it's impossible. Therefore humans must get their cleverness from some supernatural source. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Jan 30 16:19:39 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 30 Jan 2010 08:19:39 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <713314.67215.qm@web36501.mail.mud.yahoo.com> --- On Sat, 1/30/10, Stathis Papaioannou wrote: > A neuron will also be able to follow the formal principles > without understanding anything, or at any rate understanding much > less than a human doing the same job. I don't disagree but it misses the point. In Searle's reply to his systems critics, he becomes the system and *neither he nor anything inside him* can understand the symbols. You reply "Yeah well neurons don't know anything either but the system does". Do you see how that misses the point? *We can no longer compare the man to a neuron in a larger system*. We cannot do so because the man becomes the entire system, and his neurons lack understanding just as he does. He no longer exists as part of a larger system that might understand the symbols, unless you want to step foot into the domain of religion and claim that some god understands the symbols that he cannot understand. Is that your claim? >> Briefly: We cannot first understand the meaning of a >> symbol from looking only at its form. We must learn the >> meaning in some other way, and attach that meaning to the >> form, such that we can subsequently recognize that form and >> know the meaning. > > Yes, symbol grounding, which occurs when you have sensory > input. That completely solves the logical problem of where symbols get > their meaning I created the 'robot reply to the cra' thread to discuss this, but haven't pursued it mainly because it makes no sense until you understand the basic CRA. Every serious rebuttal to the CRA -- all the serious rebuttals by serious philosophers of the subject including those who advocate the robot reply -- starts with a recognition that if nothing else Searle makes a good point that: A3: syntax is neither constitutive of nor sufficient for semantics. It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now?
-gts From jonkc at bellsouth.net Sat Jan 30 16:57:22 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 30 Jan 2010 11:57:22 -0500 Subject: [ExI] Understanding is useless In-Reply-To: <165704.91501.qm@web36502.mail.mud.yahoo.com> References: <165704.91501.qm@web36502.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has posted 12 times. Me: >> it's true I don't have a clue what I'm talking about, but I don't consider that an insult. Gordon: > I think you should consider it an insult. I don't see why. According to you understanding is about as useful as a sack full of dead rats in a toothpaste factory, so if I don't waste valuable mental resources pointlessly spinning wheels within wheels I could write and speak more intelligently if I quite literally don't know what I'm talking about. John K Clark From jonkc at bellsouth.net Sat Jan 30 17:19:28 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 30 Jan 2010 12:19:28 -0500 Subject: [ExI] The digital nature of brains. In-Reply-To: <713314.67215.qm@web36501.mail.mud.yahoo.com> References: <713314.67215.qm@web36501.mail.mud.yahoo.com> Message-ID: On Jan 30, 2010, Gordon Swobe wrote: > In Searle's reply to his systems critics, he becomes the system and *neither he nor anything inside him* can understand the symbols. You are absolutely correct, that is Searle's reply to his critics; and as far as substance goes he might as well have told them "you suck". He blithely states "nor anything inside him can understand the symbols" and then thinks by simply making a declaration he has proven it. John K Clark From thespike at satx.rr.com Sat Jan 30 18:18:03 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 30 Jan 2010 12:18:03 -0600 Subject: [ExI] overposting In-Reply-To: References: <165704.91501.qm@web36502.mail.mud.yahoo.com> Message-ID: <4B6477DB.3030006@satx.rr.com> On 1/30/2010 10:57 AM, John Clark wrote: > Since my last post Gordon Swobe has posted 12 times. Yes. It's clearly getting out of hand. There's a posting limit on this list: "Given the number of subscribers to the Extropy-Chat mailing list, we strongly recommend that you restrain yourself to a MAXIMUM of eight posts per day. Those who exceed the eight posts per day limit will receive a private warning. Repeat offenders will be subject to other measures such as temporary or permanent bans from the list." I tend to immediately trash Gordon's posts these days so I can't count them, but I wouldn't be surprised if his posting exceeds that limit. Might be time for the gag to be applied. Damien Broderick From spike66 at att.net Sat Jan 30 18:30:37 2010 From: spike66 at att.net (spike) Date: Sat, 30 Jan 2010 10:30:37 -0800 Subject: [ExI] overposting In-Reply-To: <4B6477DB.3030006@satx.rr.com> References: <165704.91501.qm@web36502.mail.mud.yahoo.com> <4B6477DB.3030006@satx.rr.com> Message-ID: <22A981C5BA0846FEAC2FFC95800D5428@spike> > Yes. It's clearly getting out of hand. There's a posting > limit on this list: Oy vey yes, those who have been posting all the verbiage: do observe that for years we have had a voluntary guideline of about 8 posts a day. Granted if you are away for a long time then come home and post a lot of stuff that day, that's OK, but I have seen a lot of grinding away on a particular subject that is too much over a number of weeks.
spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Damien Broderick > Sent: Saturday, January 30, 2010 10:18 AM > To: ExI chat list > Subject: [ExI] overposting > > On 1/30/2010 10:57 AM, John Clark wrote: > > > Since my last post Gordon Swobe has posted 12 times. > > Yes. It's clearly getting out of hand. There's a posting > limit on this list: > > "Given the number of subscribers to the Extropy-Chat mailing > list, we strongly recommend that you restrain yourself to a > MAXIMUM of eight posts per day. Those who exceed the eight > posts per day limit will receive a private warning. Repeat > offenders will be subject to other measures such as temporary > or permanent bans from the list." > > I tend to immediately trash Gordon's posts these days so I > can't count them, but I wouldn't be surprised if his posting > exceeds that limit. > > Might be time for the gag to be applied. > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From jrd1415 at gmail.com Sat Jan 30 18:47:07 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 30 Jan 2010 11:47:07 -0700 Subject: [ExI] overposting In-Reply-To: <4B6477DB.3030006@satx.rr.com> References: <165704.91501.qm@web36502.mail.mud.yahoo.com> <4B6477DB.3030006@satx.rr.com> Message-ID: On Sat, Jan 30, 2010 at 11:18 AM, Damien Broderick wrote: > On 1/30/2010 10:57 AM, John Clark wrote: > >> Since my last post Gordon Swobe has posted 12 times. ... > Might be time for the gag to be applied. Except that if Gordon is responding to posters who have responded to him, then there should be a waiver on the limit. If no one is responding to a poster, and he then over-posts, well, ok, shut him down. But you don't want to censor an ongoing conversation just because it's getting tedious to you. Sometimes a particularly vigorous thread just has to peter out in tediousness. I don't know that I understand Gordon's position. He says I still don't, and I'm willing to take him at his word. Mostly though, I find him eminently civil -- as he has reminded us -- and (to me) that counts for a lot. Gordon has my respect and I don't think he should be shut down, certainly not on a "technicality". Best, Jeff Davis "And I think to myself, what a wonderful world!" Louie Armstrong From thespike at satx.rr.com Sat Jan 30 18:48:56 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 30 Jan 2010 12:48:56 -0600 Subject: [ExI] Psi and gullibility In-Reply-To: <40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com> <40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net> Message-ID: <4B647F18.808@satx.rr.com> By the way, the usual gibe is this: "If you're so psychic, why don't you win the lottery?" It's possible (I suggest) that psi trawls through alternative possible futures, weighted by their current but contingent probability.
Some targets (such as the date of the next lunar eclipse) should be highly stable and hence maximally precognizable, perhaps earthquakes similarly--unless they are *extremely* subject to chaotic triggers, horse races fairly open to small changes or nobbling, while lotteries might be almost totally unpredictable unless they are rigged. Against this picture (which might appeal to the MW enthusiasts on the list), there does seem to be a lot of evidence of successful precognition, in the lab, of quantum-random-driven outcomes (a 4-state machine flickering very fast inside its black box that contains a radioactive source, e.g.). Damien Broderick From thespike at satx.rr.com Sat Jan 30 19:08:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 30 Jan 2010 13:08:14 -0600 Subject: Re: [ExI] overposting In-Reply-To: References: <165704.91501.qm@web36502.mail.mud.yahoo.com> <4B6477DB.3030006@satx.rr.com> Message-ID: <4B64839E.2080209@satx.rr.com> On 1/30/2010 12:47 PM, Jeff Davis wrote: > Except that if Gordon is responding to posters who have responded to > him, then there should be a waiver on the limit. If no one is > responding to a poster, and he then over-posts, well, ok, shut him > down. But you don't want to censor an ongoing conversation just > because it's getting tedious to you. Good point. Such is the benefit of the Delete button. And I agree that Gordon is nothing if not civil and good-humored in all this. Damien Broderick From ablainey at aol.com Sat Jan 30 19:28:31 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Sat, 30 Jan 2010 14:28:31 -0500 Subject: [ExI] 1984 and Brave New World In-Reply-To: <580930c21001300409j5248498s5f0586034446eadd@mail.gmail.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com><4B5E0BE1.8030403@satx.rr.com><3B7522BD-78B4-42B8-8DDD-621E1413FD11@bellsouth.net><580930c21001260334qffc1c17w91221343b629376b@mail.gmail.com><3511AAD5-AD7C-4476-B51B-CA456BB372A3@bellsouth.net><580930c21001261030t16fefe9cj4218fb084336e1b8@mail.gmail.com><0FC88307-824A-499A-A6FB-704562E64826@bellsouth.net><580930c21001270255p7f60f082i3c32d83add53d69d@mail.gmail.com><4095387A-3086-44E8-8FE9-090BB15A3937@bellsouth.net> <580930c21001300409j5248498s5f0586034446eadd@mail.gmail.com> Message-ID: <8CC6FF98A95D87F-3F3C-20289@webmail-m047.sysops.aol.com> Just a thought on comparing the two books. I have just watched BNW (1980 version) and re-read the first 5 chapters of 1984. Immediately it has become obvious to me that intentionally or not, both writers left the same 'out' for mankind as a whole. While both stories concentrated on the dystopian societies, the secondary society groups in each story are the real heroes. In BNW Huxley actually gives two alternative human groups which offer greater promise than the central focus. Firstly and most obviously he gives us the Savages. On the face of it, they appear subhuman and similarly static. However without the constant censorship and static constraints, they have the potential to evolve. Secondly and most important, Huxley gives us the free islands, where 'men can think and do as they wish'. This is the ideal, as people who do not conform to the static society are transferred there. In 1984, Orwell concentrates on the party. However much mention is made of the proles, who are considered sub-human. As such the party readily adopts and enforces Newspeak amongst its members, but the proles are generally left to their own devices, appeased with beer and porn. They are seen as nothing more than robots.
So the party sows the seeds of its own destruction. While Newspeak will inevitably dehumanise the party members, leaving them without adequate means to articulate or even think about thought crime, the proles will doubtless be in a far better position, where they retain a far superior working language. Double-plus-ungood for the party. Anyway the point being that in both books, it is the disregarded objects of contempt that offer the hope for mankind. From eric at m056832107.syzygy.com Sat Jan 30 23:51:27 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 30 Jan 2010 23:51:27 -0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <747916.4268.qm@web36504.mail.mud.yahoo.com> References: <20100130035408.5.qmail@syzygy.com> <747916.4268.qm@web36504.mail.mud.yahoo.com> Message-ID: <20100130235127.5.qmail@syzygy.com> Gordon: >Eric: >> Well, nothing except vast quantities of information about >> Chinese language sufficient to answer questions as well as a native >> speaker. He seems to consider this a trivial detail. > >That information in the computer would seem important only to someone > who did not understand the question of strong vs weak AI. Well, synaptic connection strengths between 100 billion neurons seems like rather a lot of information, much of which might be crucial to understanding. As to strong vs. weak AI, part of the question here is what that difference is. In the referenced paper, Searle says that weak AI would be a useful tool for understanding intelligence, while strong AI would duplicate intelligence. It would appear that Eliza would fall under this definition of weak AI, though Searle may not agree. I claim (and I expect you would dispute) that an accurate neural level simulation of a healthy human brain would constitute strong AI. Assuming that such a simulation accurately reproduced responses of an intelligent human (it passes the Turing Test), I'm going to guess that you'd grant it weak AI status, but not strong AI status. Furthermore, you seem to be asserting that no test based on its behavior could ever convince you to grant it strong status. Let's go a step farther and place the computer running this simulation within the skull of the person we have duplicated, replacing their brain. It's connected with all of the neurons which used to feed into the brain. Now, what you have is a human body which behaves completely normally. I present you with two humans, one of which has had this operation performed, and the other of which hasn't. Both claim to be the one who hasn't, but of course one of them is lying (or perhaps mistaken). How could you tell which is which? This is of course a variant of the classic Turing Test, and we've already stipulated that this simulation passes the Turing Test. So, can you tell the difference? Or do you claim that it will always be impossible to create such a simulation in the first place? No, wait, you've already said that systems that pass the Turing Test will be possible, so you're no longer claiming that it is impossible. Do you want to change your mind on that again? >>Searle: >> My car and my adding machine, on >> the other hand, understand >> nothing[.] > >> Once again, he's simply asserting something. > >Do you think your car has understanding of roads? How about doorknobs > and screwdrivers?
He makes a reductio ad absurdum argument here (but > you leave out the context) illustrating that we must draw a line > somewhere between those things that have minds and those that don't. So it is a question of where to draw the line. I draw it at information processing. If something is processing information, it has some level of understanding. The adding machine processes information, so it has a tiny amount of understanding. I process a lot more information in much more sophisticated ways, so I have a much greater understanding. A screwdriver does not process information. Understanding is not Special Sauce which can only come from god. >Apparently you believe that if you embodied the system as did Searle, > and that if you did not understand the symbols as Searle didn't, that > the system would nevertheless have a conscious understanding of the > symbols. Yes. It demonstrates that understanding through its behavior. > But I don't think you can articulate how. You just want to > state it as an article of faith. It acquires understanding in *exactly* the same way that you do. I assume as an article of faith that you have understanding as well. So what? Can you articulate how you acquire understanding? >Did you simply miss his counter-argument to the systems reply? I didn't see anything other than unsupported assertions, so if there was an argument there, then I certainly missed it. > *He becomes the system* and still does not understand the > symbols. There exists [no] "vastly greater system" that understands > them, unless you want to step foot into the religious realm. We are once again looping back to material covered earlier (that seems to be all we're doing). The "he becomes the system" thing is stretching the analogy way past its breaking point. If we're talking about an ordinary human (which Searle apparently is), then there is no way that human could contain enough information or process it quickly enough to pass the Turing Test before dying of old age (or even before the heat death of the universe). If the system is a neural level simulation, then the human must maintain state information on every neuron in a human brain. There isn't anywhere to put that information, as the human's neurons are already full keeping their own state. So, in order to make the system work, we've got to seriously augment that human into a vastly greater system. So, if Searle's reply results in a working system, then it is no different than the earlier case, and his reply is meaningless. If, on the other hand, we keep the human unaugmented, then the resulting system cannot pass the Turing Test, which was given as a precondition, so his reply is meaningless. Do you see any other option? -eric From stathisp at gmail.com Sun Jan 31 00:33:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 31 Jan 2010 11:33:51 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <713314.67215.qm@web36501.mail.mud.yahoo.com> References: <713314.67215.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/31 Gordon Swobe : > --- On Sat, 1/30/10, Stathis Papaioannou wrote: > >> A neuron will also be able to follow the formal principles >> without understanding anything, or at any rate understanding much >> less than a human doing the same job. > > I don't disagree but it misses the point. In Searle's reply to his systems critics, he becomes the system and *neither he nor anything inside him* can understand the symbols.
You reply "Yeah well neurons don't know anything either but the system does". Do you see how that misses the point? *We can no longer compare the man to a neuron in a larger system*. We cannot do so because the man becomes the entire system, and his neurons lack understanding just as he does. > > He no longer exists as part of a larger system that might understand the symbols, unless you want to step foot into the domain of religion and claim that some god understands the symbols that he cannot understand. Is that your claim? He is the whole system, but his intelligence is only a small and inessential part of the system, as it could easily be replaced by dumber components. It's irrelevant that the man doesn't really understand what he is doing. The ensemble of neurons doesn't understand what it's doing either, and they are the whole system too. Even if the neurons were somehow linked as one organism, which knew exactly how and when to fire its constituent parts, the intelligence involved in doing this would be a separate submind, with its actions resulting in the more impressive human mind. Another way to look at this is what I called the extended CRA (which is similar to Ned Block's Chinese Nation argument): instead of one man, there are two or more cooperating. This is now closer to the behaviour of the brain. Would you say that this system can have consciousness even though the single man CR cannot? >> Briefly: We cannot first understand the meaning of a >> symbol from looking only at its form. We must learn the >> meaning in some other way, and attach that meaning to the >> form, such that we can subsequently recognize that form and >> know the meaning. >> >> Yes, symbol grounding, which occurs when you have sensory >> input. That completely solves the logical problem of where symbols get >> their meaning > > I created the 'robot reply to the cra' thread to discuss this, but haven't pursued it mainly because it makes no sense until you understand the basic CRA. Every serious rebuttal to the CRA -- all the serious rebuttals by serious philosophers of the subject including those who advocate the robot reply -- starts with a recognition that if nothing else Searle makes a good point that: > > A3: syntax is neither constitutive of nor sufficient for semantics. > > It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now? No, I assert the very opposite: that meaning is nothing but the association of one input with another input. You posit that there is a magical extra step, which is completely useless and undetectable by any means. -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 31 00:54:25 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 31 Jan 2010 11:54:25 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100130235127.5.qmail@syzygy.com> References: <20100130035408.5.qmail@syzygy.com> <747916.4268.qm@web36504.mail.mud.yahoo.com> <20100130235127.5.qmail@syzygy.com> Message-ID: On 31 January 2010 10:51, Eric Messick wrote: > The "he becomes the system" thing is stretching the analogy way past > its breaking point.
If we're talking about an ordinary human (which > Searle apparently is), then there is no way that human could contain > enough information or process it quickly enough to pass the Turing > Test before dying of old age (or even before the heat death of the > universe). > > If the system is a neural level simulation, then the human must > maintain state information on every neuron in a human brain. There > isn't anywhere to put that information, as the human's neurons are > already full keeping their own state. I don't see any problem in principle with the human being the whole system but not understanding what he is doing. A human could follow a simple algorithm and not understand its greater purpose, so why would you demand that he understand a very complex algorithm? The intelligence of the man in the CR does not actually participate in the process except to the extent that it is needed to manipulate symbols, so all he understands is the symbol manipulation. This is the same for a brain or a computer: the components only need understand their own basic job. If neurons were all linked as one mind which knows when to make its constituent parts fire, that mind would not necessarily have any knowledge of the human mind it was generating and vice-versa. -- Stathis Papaioannou From spike66 at att.net Sat Jan 30 23:23:06 2010 From: spike66 at att.net (spike) Date: Sat, 30 Jan 2010 15:23:06 -0800 Subject: [ExI] Psi and gullibility In-Reply-To: <4B647F18.808@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com><40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net> <4B647F18.808@satx.rr.com> Message-ID: <324A223F0E0448628A5A7E8CD84F1ABD@spike> > ...On Behalf Of Damien Broderick > Subject: Re: [ExI] Psi and gullibility > > By the way, the usual gibe is this: > > "If you're so psychic, why don't you win the lottery?" > ... Damien Broderick I know the answer! In fact I have a weak version of psi that I can use on the lottery. It is a weak version because it doesn't actually tell me what the winning lottery number is, but rather after I choose some numbers and purchase the ticket, my pseudopsi assures me I just picked a loser. Works every time. spike From eric at m056832107.syzygy.com Sun Jan 31 03:48:14 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 31 Jan 2010 03:48:14 -0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <20100130035408.5.qmail@syzygy.com> <747916.4268.qm@web36504.mail.mud.yahoo.com> <20100130235127.5.qmail@syzygy.com> Message-ID: <20100131034814.5.qmail@syzygy.com> Stathis writes: >I don't see any problem in principle with the human being the whole >system but not understanding what he is doing. In principle, no problem. I was addressing a problem of scale. Any program capable of passing the Turing Test is going to have to be pretty complicated. Humans don't do a very good job at simulating computers, and are really bad at it if you don't even allow them a piece of paper to scratch some notes on. As a result, a human just sitting and thinking about being a computer running a program which passes the Turing Test is going to run that program very slowly. Note that this is a problem with either the original Chinese room, or the Chinese room subsumed into a man.
If you need to simulate at the neuron level (which is a conservative estimate of what might be required), then the program has to keep track of 100 billion neurons and their thousands of interconnections each. How many people are capable of remembering that much raw data without any outside assistance? None. Basing an intuitive argument on something which is so far out of the bounds of the possible is dishonest. If Searle actually believes it, he's profoundly deluding himself. As I said, this same objection applies to the original Chinese room. Even if we postulate an incredibly simple program, like Eliza for example, a human simulating a computer running such a program would take years to have a simple conversation. The necessary computing speed is off by at least a factor of a million. I know that the argument is supposed to be a philosophical one about the locus of understanding, but human intuition only works for things at human time and space scales. Make things a million times smaller and you start running into quantum mechanics issues. Make things a million times faster and you're getting into the realm of relativity. Neither of these follows human intuition. Let's look at what it would take to make a Chinese room run a neural level simulation in real time. We could probably get away with a base simulation rate of about 1000 Hz. For each millisecond we'd need to perform at least one calculation per synapse. Figure 100 synapses per neuron, so that's 1E5 calculations per neuron per second. If a human could perform about 1 calculation per second, that's 1E5 humans per neuron. Multiply by 1E11 neurons and we need about 1E16 humans to perform the calculations. There are currently about 1E10 humans, so we're talking about a million times the population of the planet to simulate a single human. Even if my numbers are *way* off, the endeavor is clearly ridiculous. Intuition about how such a system might behave is profoundly silly. Claiming that such a system cannot have a particular property is again either dishonest or delusional. -eric From thespike at satx.rr.com Sun Jan 31 05:44:36 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 30 Jan 2010 23:44:36 -0600 Subject: Re: [ExI] Psi and gullibility In-Reply-To: <324A223F0E0448628A5A7E8CD84F1ABD@spike> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com><40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net> <4B647F18.808@satx.rr.com> <324A223F0E0448628A5A7E8CD84F1ABD@spike> Message-ID: <4B6518C4.8030905@satx.rr.com> On 1/30/2010 5:23 PM, spike wrote: > I choose some numbers and > purchase the ticket, my pseudopsi assures me I just picked a loser. Works > every time. And yet you keep buying lottery tickets? This is not the mark of an intelligent choice. I didn't need psi to tell me that.
Damien Broderick From spike66 at att.net Sun Jan 31 06:28:18 2010 From: spike66 at att.net (spike) Date: Sat, 30 Jan 2010 22:28:18 -0800 Subject: Re: [ExI] Psi and gullibility In-Reply-To: <4B6518C4.8030905@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com><40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net> <4B647F18.808@satx.rr.com><324A223F0E0448628A5A7E8CD84F1ABD@spike> <4B6518C4.8030905@satx.rr.com> Message-ID: > ...On Behalf Of Damien Broderick > Subject: Re: [ExI] Psi and gullibility > > On 1/30/2010 5:23 PM, spike wrote: > > > I choose some numbers and > > purchase the ticket, my pseudopsi assures me I just picked > a loser. > > Works every time. > > And yet you keep buying lottery tickets? This is not the mark > of an intelligent choice. I didn't need psi to tell me that. > > Damien Broderick Actually it was a gag. I've never bought a lottery ticket. The discussion reminds me of one that I have had many times with family members regarding biblical prophecy, and how the prophetic writings foretell the future. My argument is that in no cases, exactly zero cases, was anyone ever able to read a biblical passage and know in advance what was to happen. Only after the fact were the events interpreted as being a fulfillment, and even then only by having the history of the events rewritten to fit the expected outcome. spike From hkeithhenson at gmail.com Sun Jan 31 09:52:20 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 31 Jan 2010 02:52:20 -0700 Subject: [ExI] Glacier Geoengineering Message-ID: The object is to freeze a glacier to bedrock. The average heat flow over the entire Earth is 87 mW/m^2 http://www.answers.com/topic/earth-heat-flow-in http://geophysics.ou.edu/geomechanics/notes/heatflow/global_heat_flow.htm Call it 100 mW/square meter, a tenth of a watt. A square km would have a heat flux of 100,000 watts or 100 kW. Propane absorbs 428 kJ/kg evaporating. It boils at one atmosphere at -43 deg C. (Propylene boils about 10 deg colder so might use that instead.) http://www.engineeringtoolbox.com/fluids-evaporation-latent-heat-d_147.html So to pull 100 kW (a hundred kJ in a second) would take about 1/4 kg of propane per second. That's 15 kg/minute. Propane has half the density of water, so it would be in the range of 30 l/minute going down the hole. Coming back up as vapor, a cubic meter has a mass of about 1.9 kg/cubic meter, so 15 kg/minute would be ~eight cubic meters per minute, or about 0.13 cubic meters per second. Eight m/sec is an ok rate for gases so an area of about 1/60 square meter is enough. As square pipe, it would be about 13 cm on a side. So something like 15 cm pipe would be large enough. Air (at -50 C) will carry away heat at about 1.5 kJ/m^3 per deg K. For a five deg temperature difference between the air and the propane, air will carry away about 7.5 kJ/m^3. For 100 kW, that's 13 cubic meters per second or 48,000 cubic meters per hour. If 10 km/hr is the average air speed through the heat exchanger, then the cooling air intake area would need to be 4.8 square meters. I think one per square km isn't close enough to freeze the water at the bottom of a glacier to the rock. They might be put in at 300 meter intervals. (I don't have the patience to find and run a heat flow program.)
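For anyone who wants to play with the numbers, the back-of-envelope arithmetic above fits in a few lines of Python (the values are the assumptions stated above, not measurements):

    # Back-of-envelope check of the heat-pipe sizing above (assumed values).
    GEOTHERMAL_FLUX = 0.1        # W/m^2, rounded up from 87 mW/m^2
    AREA = 1e6                   # m^2, one square km
    HEAT_LOAD = GEOTHERMAL_FLUX * AREA           # 100,000 W = 100 kW

    LATENT_HEAT = 428e3          # J/kg, propane heat of vaporization
    liquid_flow = HEAT_LOAD / LATENT_HEAT        # ~0.23 kg/s of propane
    vapor_flow = liquid_flow / 1.9               # m^3/s at ~1.9 kg/m^3
    pipe_area = vapor_flow / 8.0                 # m^2 at 8 m/s vapor speed

    AIR_HEAT = 1.5e3 * 5         # J/m^3 of air for a 5 K temperature rise
    air_flow = HEAT_LOAD / AIR_HEAT              # ~13 m^3/s of cooling air
    intake_area = air_flow / (10e3 / 3600.0)     # m^2 at 10 km/hr air speed

    print("heat load: %.0f kW" % (HEAT_LOAD / 1e3))
    print("propane: %.2f kg/s liquid, %.3f m^3/s vapor, %.3f m^2 pipe" %
          (liquid_flow, vapor_flow, pipe_area))
    print("air: %.0f m^3/s, intake %.1f m^2" % (air_flow, intake_area))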
Because the heat pipe only works part of the year, the system would have to be doubled or tripled in scale. This is only enough to take out the heat coming out of the earth. Probably need it somewhat larger to pull the huge masses of ice in a few decades down to a temperature where they would flow much slower. Glaciers cover about the same percentage of the earth as farmland. I don't know how much of them would have to be blocked to slow them down, perhaps 5-10 percent of the area. Keith From bbenzai at yahoo.com Sun Jan 31 11:44:14 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 31 Jan 2010 03:44:14 -0800 (PST) Subject: [ExI] How to ground a symbol In-Reply-To: Message-ID: <814907.90930.qm@web113613.mail.gq1.yahoo.com> Gordon wrote: "We cannot first understand the meaning of a symbol from looking only at its form" Obviously not. And: "We must learn the meaning in some other way, and attach that meaning to the form, such that we can subsequently recognize that form and know the meaning" Indeed. There's only one way I can think of to do this, and that's through association with sensory data (or more accurately, association with a set of abstracted commonalities in a set of mind-states produced in conjunction with the reception of sensory data, but that's a bit of a mouthful). The word "Red" written in a dictionary (or as a piece of data in a computer memory, or a pattern of neuron firings in some part of a brain) is meaningless on its own. Of course. A system that associates the word "Red" with the various states produced within itself whenever its sensory apparatus receives light in a particular range of wavelengths, or when it recreates some of these states from previously-stored data (remembering), thereby assigns a meaning to the word. "Red" becomes a shorthand for an abstracted set of common elements in these states. This is the training phase, when it extracts commonalities from a large set of examples. Artificial neural nets and learning algorithms need to go through this phase, and so do babies. As far as I can see, this is the only meaning that "symbol grounding" can possibly have, and any system of sufficient complexity, with sensory inputs, memory storage, pattern matching methods and training data, can do it. It makes no difference whether that system is biological, electromechanical, digital, analogue, stones in grooves in a vast desert, or charged particles in a system of magnetic fields. It's the processing of sensory information that matters. In future, whenever the system sees a rose, it will know whether it's a red rose or not, because there'll be a part of its internal state that matches the symbol "Red". If it's running the correct kind of pattern-matching algorithms, it will recognise this instantly, and know that the rose is "Red". This also explains why we can use the same word for slightly different things. One system can be exposed to lots of cyan things during its development, and taught to use the word "Blue", another may be exposed to lots of spectrum blue things, and associate the same word. They will both use the same word for the same general end of the spectrum, but may later argue whether that girl's dress is actually "Blue" or "Green". This happens with humans all the time, and I fully expect it to happen with AIs.
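In code the whole idea is almost embarrassingly small. A minimal Python sketch of the training phase and the later recognition (the wavelength samples, the labels and the nearest-mean rule are all invented for illustration, not a claim about how real vision works):

    # Toy "symbol grounding": associate colour words with sensory samples
    # (wavelengths in nm), then label new input by the nearest learned mean.
    # All data here are invented for illustration.

    training = {
        "red":  [700, 680, 650, 660],   # samples seen while being told "red"
        "blue": [470, 460, 440, 455],
    }

    # Training phase: abstract a commonality (here just the mean) from
    # the sensory states that co-occurred with each symbol.
    grounded = {}
    for word, samples in training.items():
        grounded[word] = sum(samples) / float(len(samples))

    def name_colour(wavelength_nm):
        """Label a new sensory input with the closest grounded symbol."""
        return min(grounded, key=lambda w: abs(grounded[w] - wavelength_nm))

    print(name_colour(690))   # -> red
    print(name_colour(450))   # -> blue
    print(name_colour(495))   # borderline: systems trained on different
                              # samples will disagree here, "Blue" vs "Green"

Two copies of this trained on different samples will draw the boundary in different places, which is exactly the argument about the girl's dress.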
Ben Zaiboc From nebathenemi at yahoo.co.uk Sun Jan 31 15:09:10 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sun, 31 Jan 2010 15:09:10 +0000 (GMT) Subject: [ExI] 1984 and Brave New World In-Reply-To: Message-ID: <12411.23612.qm@web27003.mail.ukl.yahoo.com> 1984 and Brave New World are both reflections on the dystopian consequences of some of the utopian thinking doing the rounds at the time they were written. 1984 reflects "The Soviets were our allies against Hitler, and they are delivering a Communist revolution to the world! This is great for Socialism!" Orwell drives home how horrifying Stalinist communism is to anyone who peers beneath the surface, and warns against totalitarianism. As a work of futurology, it did give us a clue as to how bad East Germany could get under a regime where the Stasi employed 1 in 5 of the population as an informant. In fact, 1984 probably did seem contemporary in Berlin in the 80s. Brave New World reflects the utopian thinking of those who believed a technocratic elite could bestow happiness for all, and its focus on biological engineering of people and society reflects the early 20th century eugenicists. In a time when people were publicly advocating the sterilisation of undesirable types, and where people were using dubious biology to push forward their own political views, Huxley warns us of one way in which this could end up. In our modern time, we think of 1984 as old-fashioned - our politicians have found it's better to sell us surveillance as a policing and counter-terrorism tool to help us feel safer, and they sell war with the co-operation of a willing media (as the Iraq war inquiry in Britain is showing). In our time of rapidly advancing biotechnology, Brave New World retains its power to shock. Hopefully, we live in less class-obsessed times and slightly less racist times than Huxley (or at least we're all aware that genocide is not acceptable in polite company), but we still have to consider how our technological choices will affect our society. There are some transhumanists out there who I wouldn't let within a mile of any political policy decisions. Tom From jonkc at bellsouth.net Sun Jan 31 16:08:25 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 31 Jan 2010 11:08:25 -0500 Subject: [ExI] Psi and gullibility In-Reply-To: <4B647F18.808@satx.rr.com> References: <719643.84470.qm@web110416.mail.gq1.yahoo.com> <4B5C7B99.6050204@satx.rr.com> <4B5CA6F4.5040506@satx.rr.com> <4B5DC8E9.3060609@satx.rr.com> <4B5E0176.4050507@satx.rr.com> <4B5E443A.2070709@satx.rr.com> <4B5F3D60.90002@satx.rr.com> <40CB579B-8022-4515-B57C-0A23D0B0B0BB@bellsouth.net> <4B647F18.808@satx.rr.com> Message-ID: <3E525396-F780-4E56-951F-A88F3228EA83@bellsouth.net> On Jan 30, 2010, Damien Broderick wrote: > there does seem to be a lot of evidence of successful precognition, in the lab, of quantum-random-driven outcomes And by "evidence" you mean stuff somebody typed onto a website that you'd never heard of before it was discovered by Google. I know how to type too so I'm not impressed. > If you're so psychic, why don't you win the lottery? Damn good question. > lotteries might be almost totally unpredictable unless they are rigged. Damn good answer. John K Clark
From gts_2000 at yahoo.com Sun Jan 31 17:25:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 09:25:45 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100130235127.5.qmail@syzygy.com> Message-ID: <225359.67975.qm@web36501.mail.mud.yahoo.com> --- On Sat, 1/30/10, Eric Messick wrote: > In the referenced paper, Searle says that weak AI would > be a useful tool for understanding intelligence, while strong AI would > duplicate intelligence. We might reasonably attribute intelligence to both strong and weak AI systems. However for a system to have strong AI it must also have intentional states defined as conscious thoughts, beliefs, hopes, desires and so on. It must have a subjective conscious mind in the sense that you, Eric, have a mind. > I claim (and I expect you would dispute) that an accurate > neural level simulation of a healthy human brain would constitute > strong AI. I dispute that, yes, if the simulation consists of software running on hardware. > Assuming that such a simulation accurately reproduced > responses of an intelligent human (it passes the Turing Test), > I'm going to guess that you'd grant it weak AI status, but not strong > AI status. Right. > Furthermore, you seem to be asserting that no test based on > its behavior could ever convince you to grant it strong > status. Right. Such a system might at first fool me into believing it had strong AI status. I would however discover the deception if I obtained knowledge of its inner workings and found the architecture of a software/hardware system running formal programs as such systems exist today. I would then demote the system to weak AI status. > Let's go a step farther and place the computer running this > simulation within the skull of the person we have duplicated, > replacing their brain. It's connected with all of the neurons which > used to feed into the brain. > Now, what you have is a human body which behaves completely > normally. Still weak AI. > I present you with two humans, one of which has had this > operation performed, and the other of which hasn't. Both claim > to be the one who hasn't, but of course one of them is lying > (or perhaps mistaken). > > How could you tell which is which? Exploratory surgery. > This is of course a variant of the classic Turing Test, and > we've already stipulated that this simulation passes the Turing > Test. > > So, can you tell the difference? I can't know the difference from their external behavior, but I can know it from a bit of surgery + some philosophical arguments. > Or do you claim that it will always be impossible to create > such a simulation in the first place? No, wait, you've > already said that systems that pass the Turing Test will be possible, > so you're no longer claiming that it is impossible. Do you want to > change your mind on that again? Excuse me? I never argued for the impossibility of such systems and I have not "changed my mind" about this. I wonder now if I can count on you for an honest discussion. What I have claimed several times is that the Turing test will give false positives for the simulation.
-gts From gts_2000 at yahoo.com Sun Jan 31 17:42:38 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 09:42:38 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) Message-ID: <491598.82004.qm@web36508.mail.mud.yahoo.com> --- On Sat, 1/30/10, Stathis Papaioannou wrote: >> He no longer exists as part of a larger system that >> might understand the symbols, unless you want to set foot >> into the domain of religion and claim that some god >> understands the symbols that he cannot understand. Is that >> your claim? > He is the whole system, but his intelligence is only a > small and inessential part of the system, as it could easily > be replaced by dumber components. Show me who or what has conscious understanding of the symbols. > It's irrelevant that the man doesn't really > understand what he is doing. The ensemble of neurons doesn't > understand what it's doing either, and they are the whole system too. I have no objection to your saying that neither the system nor anything contained in it has conscious understanding, but in that case you need to understand that you don't disagree with me; you don't believe in strong AI any more than I do. -gts From jonkc at bellsouth.net Sun Jan 31 17:58:41 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 31 Jan 2010 12:58:41 -0500 Subject: [ExI] The digital nature of brains. In-Reply-To: <225359.67975.qm@web36501.mail.mud.yahoo.com> References: <225359.67975.qm@web36501.mail.mud.yahoo.com> Message-ID: Eric Messick wrote: >> you seem to be asserting that no test based on its behavior could ever convince you to grant it strong status. > Gordon Swobe wrote: > Right. And with that one word you place yourself firmly in the creationists' camp, because if that is indeed "right" then there is no way, absolutely positively no way, Evolution could have produced consciousness, and yet consciousness has been produced at least once and probably many billions of times. > Such a system might at first fool me into believing it had strong AI status. I would however discover the deception if I obtained knowledge of its inner workings That nicely illustrates what's so inconsistent in your position. You say that however intelligent a computer is you wouldn't consider it conscious because you, Gordon Swobe, can't figure out how the machine could do that; but you admit that you don't understand how human beings do that either, and yet for reasons never explained you still think they are conscious. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at m056832107.syzygy.com Sun Jan 31 18:29:26 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 31 Jan 2010 18:29:26 -0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <225359.67975.qm@web36501.mail.mud.yahoo.com> References: <20100130235127.5.qmail@syzygy.com> <225359.67975.qm@web36501.mail.mud.yahoo.com> Message-ID: <20100131182926.5.qmail@syzygy.com> Gordon: >Eric: >> So, can you tell the difference? > >I can't know the difference from their external behavior, but I can > know it from a bit of surgery + some philosophical arguments. In other words, if you were to directly experience something which contradicts your philosophical arguments, you would believe the philosophy over the reality. I guess heavy objects must fall faster than light ones. Let's not bother to do the experiment.
>> Or do you claim that it will always be impossible to create >> such a simulation in the first place? No, wait, you've >> already said that systems that pass the Turing Test will be possible, >> so you're no longer claiming that it is impossible. Do you want to >> change your mind on that again? > >Excuse me? I never argued for the impossibility of such systems and I > have not "changed my mind" about this. I wonder now if I can count on > you for an honest discussion. Going through old messages, the first I found that fit my memory of this was: >Message-ID: <845939.46868.qm at web36506.mail.mud.yahoo.com> >Date: Mon, 28 Dec 2009 04:47:32 -0800 (PST) >From: Gordon Swobe >To: ExI chat list >Subject: Re: [ExI] The symbol grounding problem in strong AI > >--- On Sun, 12/27/09, Stathis Papaioannou wrote: >[...] >> If the replacement neurons behave normally in their >> interactions with the remaining brain, then the subject *must* >> behave normally. > >But your replacement neurons *won't* behave normally, and so your >possible conclusions don't follow. >[...] This was the start of a series of posts where you said that someone with a brain that had been partially replaced with programmatic neurons would behave as though he was at least partially not conscious. You claimed that the surgeon would have to replace more and more of the brain until he behaved as though he was conscious, but had been zombified by extensive replacement. You were, in essence, claiming that it was impossible to create programmatic neurons which would have the same behavior as biological neurons. I wonder now if I can count on you for an honest discussion. -eric From alfio.puglisi at gmail.com Sun Jan 31 19:01:21 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sun, 31 Jan 2010 20:01:21 +0100 Subject: [ExI] Glacier Geoengineering In-Reply-To: References: Message-ID: <4902d9991001311101i5a719fb1o5182295e51424100@mail.gmail.com> On Sun, Jan 31, 2010 at 10:52 AM, Keith Henson wrote: > The object is to freeze a glacier to bedrock. > > The average heat flow over the entire Earth is 87 mW/m^2 > > http://www.answers.com/topic/earth-heat-flow-in > > http://geophysics.ou.edu/geomechanics/notes/heatflow/global_heat_flow.htm > > Call it 100 mW per square meter, a tenth of a watt. > > A square km would have a heat flux of 100,000 watts, or 100 kW. > > Propane absorbs 428 kJ/kg evaporating. It boils at one atmosphere at > -43 deg C. (Propylene boils about 10 deg colder so might use that > instead.) > > http://www.engineeringtoolbox.com/fluids-evaporation-latent-heat-d_147.html > > So to pull 100 kW (a hundred kJ in a second) would take about 1/4 kg > of propane per second. That's 15 kg/minute. Propane has half the > density of water, so it would be in the range of 30 l/minute going > down the hole. Coming back up as vapor, propane has a density of about > 1.9 kg/cubic meter, so 15 kg/minute would be ~eight cubic meters > per minute, or roughly an eighth of a cubic meter per second. > Temperatures at the glacier-bedrock interface can be amazingly high. This article talks about bedrock *welding* with temperatures higher than 1,000 Celsius: http://jgs.lyellcollection.org/cgi/content/abstract/163/3/417 I guess the energy comes from the potential energy of the ice sliding down the terrain. > This is only enough to take out the heat coming out of the earth. Probably > need it somewhat larger to pull the huge masses of ice in a few decades > down to a temperature where they would flow much slower.
If one also needs to remove the heat generated gravitationally, this could be potentially much larger than just the Earth's heat flux. > > Glaciers cover about the same percentage of the earth as farmland. I > don't know how much of them would have to be blocked to slow them > down, perhaps 5-10 percent of the area. > > Keith > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Jan 31 19:22:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 11:22:33 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <20100131182926.5.qmail@syzygy.com> Message-ID: <732400.27938.qm@web36501.mail.mud.yahoo.com> --- On Sun, 1/31/10, Eric Messick wrote: >> I can't know the difference from their external behavior, but I can >> know it from a bit of surgery + some philosophical >> arguments. > > In other words, if you were to directly experience > something which contradicts your philosophical arguments, you would > believe the philosophy over the reality. Looks to me like you want to put words in my mouth, and that you don't want, or perhaps don't know how, to have a fair and honest discussion. You're losing credibility with me fast. >> Excuse me? I never argued for the impossibility of such >> systems and I have not "changed my mind" about this. I wonder now if >> I can count on you for an honest discussion. > > Going through old messages, the first I found that fit my > memory of this was: > > >Message-ID: <845939.46868.qm at web36506.mail.mud.yahoo.com> > >Date: Mon, 28 Dec 2009 04:47:32 -0800 (PST) > >From: Gordon Swobe > >To: ExI chat list > >Subject: Re: [ExI] The symbol grounding problem in strong AI > > > >--- On Sun, 12/27/09, Stathis Papaioannou wrote: > >[...] > >> If the replacement neurons behave normally in their > >> interactions with the remaining brain, then the subject *must* > >> behave normally. > > > >But your replacement neurons *won't* behave normally, and so your > >possible conclusions don't follow. > >[...] > > This was the start of a series of posts where you said that > someone with a brain that had been partially replaced with > programmatic neurons would behave as though he was at least partially > not conscious. You claimed that the surgeon would have to > replace more and more of the brain until he behaved as though he was > conscious, but had been zombified by extensive replacement. Right, and Stathis' subject will eventually pass the TT just as your subject will in your thought experiment. But in both cases the TT will give false positives. The subjects will have no real first-person conscious intentional states. -gts From gts_2000 at yahoo.com Sun Jan 31 20:17:25 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 12:17:25 -0800 (PST) Subject: [ExI] How to ground a symbol In-Reply-To: <814907.90930.qm@web113613.mail.gq1.yahoo.com> Message-ID: <304772.53589.qm@web36501.mail.mud.yahoo.com> --- On Sun, 1/31/10, Ben Zaiboc wrote: > Indeed. > There's only one way I can think of to do this, and that's > through association with sensory data (or more accurately, > association with a set of abstracted commonalities in a set > of mind-states produced in conjunction with the reception of > sensory data, but that's a bit of a mouthful).
I posted this link below once already, to Stathis, but I haven't had much time or inclination to follow up on it. Not sure if it will make sense to anyone who does not already acknowledge the significance of the basic CRA. However it makes for relatively light reading and includes some animations to make the points clear. Let me know what you think. http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php -gts From gts_2000 at yahoo.com Sun Jan 31 20:01:12 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 12:01:12 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <394481.10295.qm@web36506.mail.mud.yahoo.com> --- On Sun, 1/31/10, Spencer Campbell wrote: > A healthy human brain has intentional states defined as > conscious thoughts, beliefs, hopes, desires and so on. Absolutely! > An accurate neural level simulation of a healthy human > brain would, therefore, replicate those states. Otherwise it would not, > by definition, be accurate. Digital simulations of non-digital objects only *model* those things they simulate. They do not equal the things they model. To get this wrong is to confuse the model with reality, the description with the thing described, the book with the subject of the book, the simulation of the reality with the reality it simulates, the computation with the thing computed. However, a digital simulation of X will have all the real properties of X *if and only if* X already exists as a digital object. But in that case we should call that simulation of X a copy or a duplication of X, not a simulation of X. Simulations of things never equal the things they simulate, but copies do. Whether people here in extropyland realize it or not, digital models of human brains will have the real properties of natural brains if and only if natural brains already exist as digital objects, i.e., only if the human brain already exists in reality as a digital computer running software. > I was with Eric until he said this, then switched > allegiance again. From my perspective, Gordon has been very > consistent Thanks for saying that. It warms my heart. :) > In this thought experiment, Searle has "internalized" the > algorithm that he was using in the Chinese room. In effect, Searle is > now a system containing a virtual Chinese room. You could say that. > The virtual Stathis in my head says that the virtual > Chinese room is what has conscious understanding of the symbols. That virtual room exists somewhere in the man's brain/mind, and he has access to it such that he can follow the syntactic rules for manipulating the symbols well enough to pass the Turing test in Chinese. Why doesn't he also have access to its supposed intentional states so that he can understand the symbols? If he cannot access that understanding in his own head then it seems to me that we've just imagined something in a futile attempt to escape the conclusion that the man just plain cannot understand the symbols. -gts From gts_2000 at yahoo.com Sun Jan 31 21:19:17 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 13:19:17 -0800 (PST) Subject: [ExI] How to ground a symbol In-Reply-To: <814907.90930.qm@web113613.mail.gq1.yahoo.com> Message-ID: <589903.82027.qm@web36507.mail.mud.yahoo.com> --- On Sun, 1/31/10, Ben Zaiboc wrote: > In future, whenever the system sees a rose, it will know > whether it's a red rose or not, because there'll be a part > of its internal state that matches the symbol "Red".
The system you describe won't really "know" it is red. It will merely act as if it knows it is red, no different from, say, an automated camera that acts as if it knows the light level in the room and automatically adjusts for it. -gts From eric at m056832107.syzygy.com Sun Jan 31 23:05:39 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 31 Jan 2010 23:05:39 -0000 Subject: [ExI] How to ground a symbol In-Reply-To: <304772.53589.qm@web36501.mail.mud.yahoo.com> References: <814907.90930.qm@web113613.mail.gq1.yahoo.com> <304772.53589.qm@web36501.mail.mud.yahoo.com> Message-ID: <20100131230539.5.qmail@syzygy.com> Gordon sends us this link: >http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php which contains this text (written by David L Anderson): > In one of the books, there will be a sentence written in English > that says: > > "If you receive this string of shapes: 01010111011010000110000101110100, > 0110100101110011, 01100001, 011100000110100101100111, > > then send out this string of shapes: 01000001, 011100000110100101100111, > 0110100101110011, 01100001, > 0110001001100001011100100110111001111001011000010111001001100100, > 011000010110111001101001011011010110000101101100" The animations and other text at the site all indicate that this is the type of processing going on in Chinese rooms. Now, I don't know if Searle was involved in this project, and Gordon hasn't even indicated that he agrees with it, so perhaps this is just what David Anderson thinks. If this is the extent of what Chinese room supporters think computers are capable of, then it's not surprising that they don't consider them capable of understanding. I think the proper reply to this is: Come back after you've written a neural network simulator and trained it to do something useful. Then we'll see if your intuition still says that computers can't understand anything. Neural networks operate *nothing* like the above set of if-then statements. Sure, you've got something Turing complete under the neural network layer of abstraction, but you've got dumb chemical reactions under the functioning of a neuron. What matters is the action at the higher layer of abstraction. Once again, I wonder if the problem here is an inability to deal with abstractions. Can we test for that ability, teach it, or enhance it? Is it just a selective inability to deal with particular abstractions? Perhaps with a particular class of abstraction? -eric From gts_2000 at yahoo.com Sun Jan 31 23:45:30 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 15:45:30 -0800 (PST) Subject: [ExI] How to ground a symbol In-Reply-To: <20100131230539.5.qmail@syzygy.com> Message-ID: <975270.46265.qm@web36504.mail.mud.yahoo.com> --- On Sun, 1/31/10, Eric Messick wrote: > The animations and other text at the site all indicate that > this is the type of processing going on in Chinese rooms. This kind of processing goes on in every software/hardware system. > Come back after you've written a neural network > simulator and trained it to do something useful. Philosophers of mind don't care much about how "useful" it may seem. They do care if it has a mind capable of having conscious intentional states: thoughts, beliefs, desires and so on as I've already explained. I think artificial neural networks show great promise as decision making tools. But 100 billion * 0 = 0.
-gts From lacertilian at gmail.com Sat Jan 30 18:27:18 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 30 Jan 2010 10:27:18 -0800 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <713314.67215.qm@web36501.mail.mud.yahoo.com> References: <713314.67215.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > I don't disagree but it misses the point. In Searle's reply to his systems critics, he becomes the system and *neither he nor anything inside him* can understand the symbols. You reply "Yeah well neurons don't know anything either but the system does". Do you see how that misses the point? *We can no longer compare the man to a neuron in a larger system*. We cannot do so because the man becomes the entire system, and his neurons lack understanding just as he does. > > He no longer exists as part of a larger system that might understand the symbols, unless you want to set foot into the domain of religion and claim that some god understands the symbols that he cannot understand. Is that your claim? What makes you assert that nothing inside him understands the symbols? It's very obvious that Searle becomes the entire system, and equally obvious that he is no longer part of a larger system which may or may not have understanding. However, this does not exclude the possibility that Searle is now a larger system containing a smaller system which understands symbols. Such as, for example, the Earth, or a busy office building. One could even make the case for Congress. The argument to refute is this: One human being may contain more than one mind. From lacertilian at gmail.com Sun Jan 31 02:52:06 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 30 Jan 2010 18:52:06 -0800 Subject: [ExI] The nature of intelligence (was: predictive neurons?) Message-ID: Mike Dougherty : > I think calling intelligence a feature or property of the environment > makes it more like the situation where gravity is the deformation of > spacetime topology (or vice-versa) - so how might we improve > intelligence? Alter the environment? Is it a recursive process? That's precisely the principle behind formal education, so... inconclusive. My definition of intelligence is relative to the possibly-intelligent object in question, and presupposes that the object has a definite purpose: Intelligence is inversely proportional to the average time required before the object achieves its purpose, assuming that it encounters every possible obstacle on the way and that none of those obstacles are insurmountable to the object. I'll note that "obstacle" is a hypernym for "problem" in my mind. This implies that "problem-solving ability" is a less complete, but still correct, interpretation of intelligence. Things get a little more complicated for humans and other animals, whose purposes tend to change rather frequently. The easiest solution is to say that my measured intelligence changes depending on what goal I have in mind, which seems consistent with reality: I'm brilliant when it comes to eating three meals a day, yet functionally retarded when it comes to navigating the public transit system. Behavior may be either very stupid or very ingenious, depending on what you assume it's meant to accomplish. A corollary: unless the number of possible teleological states of a given system is finite, it is not possible to measure the absolute intelligence of that system.
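To make that definition concrete, here's a minimal sketch in Python (everything here is hypothetical: made-up function names and made-up timing data, just to show the shape of the measure):

from statistics import mean

def intelligence(solve_times):
    # Per the definition above: inversely proportional to the average
    # time the object needs to achieve its purpose, averaged over every
    # obstacle it could encounter (none of them insurmountable).
    return 1.0 / mean(solve_times)

# Two hypothetical agents facing the same three obstacles (arbitrary units):
print(intelligence([2.0, 4.0, 6.0]))   # 0.25
print(intelligence([1.0, 2.0, 3.0]))   # 0.5 -- twice as intelligent

The numbers mean nothing in themselves; the point is only that the measure becomes well-defined once you fix the object's purpose and its space of obstacles.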
My only problem with IQ tests is that they don't specify what they're measuring, save for "the ability to score well on IQ tests". From lacertilian at gmail.com Sun Jan 31 03:07:58 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 30 Jan 2010 19:07:58 -0800 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) Message-ID: Stathis Papaioannou : >Gordon Swobe : >> A3: syntax is neither constitutive of nor sufficient for semantics. >> >> It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now? > > No, I assert the very opposite: that meaning is nothing but the > association of one input with another input. You posit that there is a > magical extra step, which is completely useless and undetectable by > any means. Crap! Now I'm doing it too. This whole discussion is just an absurdly complex feedback loop, neither positive nor negative. It will never get better and it will never end. Yet the subject matter is interesting, and I am helpless to resist. First, yes, I agree with Stathis's assertion that association of one input with another input, or with another output, or, generally, of one datum with another datum, is the very definition of meaning. Literally, "A means B". This is mathematically equivalent to "A equals B". Smoke equals fire, therefore, if smoke is true or fire is true then both are true. This is very bad reasoning, and very human. Nevertheless, we can say that there is a semantic association between smoke and fire. Of course the definitions of semantics and syntax seem to have become deranged somewhere along the line, so someone with a different interpretation of their meaning than I have may very well leap at the chance to rub my face in it here. This is a risk I am willing to take. So! To see a computer's idea of semantics one might look at file formats. An image can be represented in BMP or PNG format, but in either case it is the same image; both files have the same meaning, though the manner in which that meaning is represented differs radically, just as 10/12 differs from 5 * 6^-1. Another source might be desktop shortcuts. You double-click the icon for the terrible browser of your choice, and your computer takes this to mean instead that you are double-clicking an EXE file in a completely different place. Note that I could very naturally insert the word "mean" there, implying a semantic association. Neither of these are nearly so human a use of semantics, because the relationship in each case is literal, not causal. However, it is still semantics: an association between two pieces of information. Gordon has no beef with a machine that produces intelligent behavior through semantic processes, only with one that produces the same behavior through syntax alone. At this point, though, his argument becomes rather hazy to me. How can anything even resembling human intelligence be produced without semantic association? A common feature in Searle's thought experiments, and in Gordon's by extension, is that there is a very poor description of the exact process by which a conversational computer determines how to respond to any given statement. This is necessary to some extent, because if anyone could give a precise description of the program that passes the Turing test, well, they could just write it.
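To put the association idea in code before I go on: a toy sketch in Python, with made-up associations, and emphatically not a claim about how any real program works:

# Meaning as association: one datum linked to another datum.
semantics = {
    "smoke": "fire",         # causal association
    "pig": "farm animal",    # definitional association
}

def answer(question):
    # Syntax alone: recognize the grammatical frame "What is a X?"
    if question.startswith("What is a ") and question.endswith("?"):
        symbol = question[len("What is a "):-1]
        meaning = semantics.get(symbol)   # semantics: follow the association
        if meaning is not None:
            return "A " + meaning         # syntax re-inserts the article
    return "I don't know."

print(answer("What is a pig?"))   # -> "A farm animal"

The dictionary lookup is the semantic part; the string-frame matching and the re-inserted "a" are the only syntactic parts. Keep that split in mind for the pig example below.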
In any case, there's just no excuse to describe that program with rules like: if I hear "What is a pig?" then I will say "A farm animal". Sure, some people give that response to that question some of the time. But if you ask it twice in a row to the same person, you will get dramatically different answers each time. It's a gross oversimplification, but I'm forced to admit that it is technically valid if one views it only as what will happen, from a very high-level perspective, if "What is a pig?" is the very next thing the Chinese Room is asked. A whole new lineup of rules like that would have to be generated after each response. Not a very practical solution. Effective, but not efficient. However, it seems to me that even if we had the brute processing power to implement a system like that while keeping it realistically quick-witted, it would still be impossible to generate that rule without the program containing at least one semantic fact, namely, "pig = farm animal". The only part syntactical rules play in this scenario is to insert the word "a" at the beginning of the sentence. Syntax is concerned only with grammatical correctness. Using syntax alone, one might imagine that the answer would be "a noun": the place at which "pig" occurs in the sentence implies that the word must be a noun, and this is as close as a syntactical rule can come to showing similarity between two symbols. If the grammar in question doesn't explicitly provide categories for symbols, as in English, then not even this can be done, and a meaningful syntax-based response is completely impossible. I started on this message to point out that Stathis had completely missed the point of A3, but sure enough I ended up picking on Searle (and Gordon) as well. In the end, I would like to make the claim: syntax implies semantics, and semantics implies syntax. One cannot find either in isolation, except in the realm of one's imagination. Like so many other divisions imposed between natural (that is, non-imaginary) phenomena, this one is valid but false. From lacertilian at gmail.com Sun Jan 31 03:59:20 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 30 Jan 2010 19:59:20 -0800 Subject: [ExI] The nature of intelligence Message-ID: Spencer Campbell : > Intelligence is inversely proportional to the average time required > blah blah blah Wait a minute! What about two solutions that take equal time, but non-equal energy? Speed is not everything. It should be average time * energy required, i.e., average ACTION required. This is a much better definition. Now the concept of "intelligent action" is mathematically coherent for me. Joy of joys! From lacertilian at gmail.com Sun Jan 31 17:56:51 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 31 Jan 2010 09:56:51 -0800 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <225359.67975.qm@web36501.mail.mud.yahoo.com> References: <20100130235127.5.qmail@syzygy.com> <225359.67975.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > We might reasonably attribute intelligence to both strong and weak AI systems. However, for a system to have strong AI it must also have intentional states defined as conscious thoughts, beliefs, hopes, desires and so on. It must have a subjective conscious mind in the sense that you, Eric, have a mind. >Eric Messick : >> I claim (and I expect you would dispute) that an accurate >> neural level simulation of a healthy human brain would constitute >> strong AI.
> > I dispute that, yes, if the simulation consists of software running on hardware. A healthy human brain has intentional states defined as conscious thoughts, beliefs, hopes, desires and so on. It has a subjective conscious mind in the sense that he, Eric, has a mind. An accurate neural level simulation of a healthy human brain would, therefore, replicate those states. Otherwise it would not, by definition, be accurate. Gordon Swobe : >Eric Messick : >> Or do you claim that it will always be impossible to create >> such a simulation in the first place? No, wait, you've >> already said that systems that pass the Turing Test will be possible, >> so you're no longer claiming that it is impossible. Do you want to >> change your mind on that again? > > Excuse me? I never argued for the impossibility of such systems and I have not "changed my mind" about this. I wonder now if I can count on you for an honest discussion. I was with Eric until he said this, then switched allegiance again. From my perspective, Gordon has been very consistent when it comes to what will and will not pass the Turing test. His arguments, implicitly or explicitly, state that the Turing test does not measure consciousness. This is one point on which he and I agree. Gordon Swobe : >Stathis Papaioannou : >> He is the whole system, but his intelligence is only a >> small and inessential part of the system, as it could easily >> be replaced by dumber components. > > Show me who or what has conscious understanding of the symbols. In this thought experiment, Searle has "internalized" the algorithm that he was using in the Chinese room. In effect, Searle is now a system containing a virtual Chinese room. The virtual Stathis in my head says that the virtual Chinese room is what has conscious understanding of the symbols. I'm inclined to agree, assuming that the Chinese room does indeed pass the Turing test, except that I would not specify "conscious" understanding. I'm not convinced that consciousness and understanding are inseparable. My unconscious mind seems to understand easily enough that it's important to keep my heart beating at a regular rate, and I'm not inclined to criticize it solely on the basis that it has no awareness (therefore, consciousness) of that understanding. From lacertilian at gmail.com Sun Jan 31 21:53:31 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 31 Jan 2010 13:53:31 -0800 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <394481.10295.qm@web36506.mail.mud.yahoo.com> References: <394481.10295.qm@web36506.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Whether people here in extropyland realize it or not, digital models of human brains will have the real properties of natural brains if and only if natural brains already exist as digital objects, i.e., only if the human brain already exists in reality as a digital computer running software. Yeah, walked right into that one. A few questions to see if I understand where you're coming from. How would one determine, in practice, whether or not any given information processor is a digital computer? Is it accurate to say that two digital computers, networked together, may themselves constitute a larger digital computer? Is the Internet a digital computer? Or, equivalently, depending on your definition of the Internet: Is the Internet a piece of software running on a digital computer? Finally, would you say that an artificial neural network is a digital computer?
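(For concreteness, when I say "artificial neural network" I mean something whose basic operation looks like the following minimal, hypothetical Python sketch, which is nothing like an if-this-string-then-that-string rule:

import math

def neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum squashed by a sigmoid.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Toy example with made-up weights:
print(neuron([1.0, 0.0], [2.0, -1.0], -0.5))   # ~0.82

Whether stacking millions of those counts as "a digital computer running software" is exactly what I'm asking.)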
Gordon Swobe : > Why doesn't he also have access to its supposed intentional states so that he can understand the symbols? If he cannot access that understanding in his own head then it seems to me that we've just imagined something in a futile attempt to escape the conclusion that the man just plain cannot understand the symbols. We haven't escaped any such conclusion: the man still has no understanding of the symbols. He doesn't have access to the virtual Chinese room's intentional states for the same reason that an instance of Minesweeper doesn't have access to a simultaneous instance of Solitaire. It's not that it's physically impossible, it's just that the two entities aren't built to interact in that way. In fact, in the man's case he was specifically engineered to lack that access. I could also say that the Earth doesn't have access to my intentional states, but it's a much more metaphorical analogy because the Earth is not instantiating me as software; we're on the same level of abstraction. Ontologically independent. If I exist then I exist irrespective of the Earth's existence, though I might not last that long if you take it out from under me. From lacertilian at gmail.com Sun Jan 31 17:31:44 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 31 Jan 2010 09:31:44 -0800 Subject: [ExI] Glacier Geoengineering In-Reply-To: References: Message-ID: Keith Henson : > The object is to freeze a glacier to bedrock. This is something you came up with? I assume the purpose is to combat global climate change. If that's so, it's an interesting idea, but pretty obviously unwise in practice. Alarmingly fast glacier movement is only a symptom, for one thing; slowing it down would buy us some time, but I suspect that not even stopping the glaciers completely could stabilize the system. The most obvious problem is that we'd have to keep pumping the coolant down there forever. After the initial freeze I suppose we could slow down, but stop entirely and the whole thing just starts up again. Causing artificial phytoplankton blooms remains my favorite geoengineering project. The mathematics going into glacier freezing are way more fun, though.
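For anyone who wants to play with the numbers, here's a back-of-the-envelope sketch in Python using the figures from Keith's post (the constants are his; the variable names and the rounding are mine):

# Keith's figures: ~100 mW/m^2 geothermal heat flux, 428 kJ/kg latent
# heat for propane, liquid density about half of water, vapor ~1.9 kg/m^3.
HEAT_FLUX = 0.1          # W/m^2
AREA = 1.0e6             # m^2, one square kilometer
LATENT_HEAT = 428.0e3    # J/kg
LIQUID_DENSITY = 500.0   # kg/m^3
VAPOR_DENSITY = 1.9      # kg/m^3

heat_load = HEAT_FLUX * AREA                            # 100,000 W = 100 kW
propane_kg_s = heat_load / LATENT_HEAT                  # ~0.23 kg/s
propane_kg_min = propane_kg_s * 60                      # ~14 kg/min
liquid_l_min = propane_kg_min / LIQUID_DENSITY * 1000   # ~28 l/min going down
vapor_m3_s = propane_kg_s / VAPOR_DENSITY               # ~0.12 m^3/s coming up

print(heat_load, propane_kg_min, liquid_l_min, vapor_m3_s)

That reproduces his ~100 kW, ~15 kg/minute and ~30 l/minute; the vapor return works out to roughly eight cubic meters per minute, or about an eighth of a cubic meter per second.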