From stathisp at gmail.com Fri Jan 1 00:50:37 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 11:50:37 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <776662.76806.qm@web36506.mail.mud.yahoo.com> References: <776662.76806.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/1 Gordon Swobe : >> The brain or computer is the physical object instantiating the mind, >> like a sphere made of stone is a physical instantiation of an abstract >> sphere. You can destroy a physical sphere but you can't destroy the >> abstract sphere > > It seems then that you suppose yourself as possessing or equaling this "abstract sphere of mind" that your brain instantiated, and that you suppose further that this abstract sphere of yours will continue to exist after your body dies. Correct me if I'm wrong. After I die my mind can be instantiated again multiple times, with different matter. If the brain were identical with the mind this would not be possible: a copy of a brain, however faithful, is a different physical object. The function of the brain can be reproduced, but not the brain itself, and this is consistent with the mind being reproducible and being a function of the brain. These metaphysical musings are interesting but have no bearing on the rigorous argument presented before, which showed that whatever the mind is, if the function of the device generating it is reproduced then the mind is also reproduced. You did not come up with a rebuttal, other than to put forward your feeling that some magic would happen to stop it being true. -- Stathis Papaioannou From stathisp at gmail.com Fri Jan 1 01:32:36 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 12:32:36 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> Message-ID: 2009/12/31 scerir : > [Stathis] > We can compute probabilistic answers, often with high certainty, where true > randomness is evolved (eg. I predict that I won't quantum tunnel to the other > side of the Earth), or we can use pseudorandom number generators. I don't think > anyone has shown a situation where true random can be distinguished from > pseudorandom, but even if that should be a stumbling block in simulating a > brain, it would be possible to bypass it by including a true random source, > such as radioactive decay, in the machine. > > # > > To my knowledge there are: > -pseudo-randomness, which is computable and deterministic; specific softwares > are the sources. > -quantum randomness, which is uncomputable (not by definition, but because of > theorems; no Turing machine can enumerate an infinity of correct bits of the > sequence produced by a quantum device); there are several sources (radioactive > decays; arrival times; beam splitters; metastable states decay; etc.) > -algorithmic randomness, which is uncomputable (I would say by definition). There is a way to produce algorithmic randomness with a Turing machine, requiring something of a trick. The Turing machine runs a program which generates a virtual world containing an observer, and the observer has a piece of paper with a bitstring written on it. At regular intervals the program duplicates the entire virtual world including the observer, but in one copy appends 1 to the bitstring and in the other appends 0. 
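A minimal Python sketch of the duplication procedure just described, offered purely as an illustration (the function name and step count are invented here, not part of the original post): the program is deterministic from the outside, yet each copy of the observer ends up holding a bitstring it could not have predicted.

def branch_worlds(steps):
    """Duplicate every 'world' at each step, appending 0 in one copy and 1 in the other."""
    worlds = [""]                                # one initial observer, empty bitstring
    for _ in range(steps):
        next_generation = []
        for bits in worlds:
            next_generation.append(bits + "0")   # one copy gets a 0 appended
            next_generation.append(bits + "1")   # the duplicate gets a 1
        worlds = next_generation
    return worlds

observers = branch_worlds(8)
print(len(observers))        # 256 observer histories, every one of them fully determined
print(observers[173])        # yet any single history reads like a run of fair coin flips

From inside any one branch the next bit is unpredictable, which is the point the following sentences take up.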
From the point of view of the observer, whether the next bit will be a 1 or a 0 is indeterminate, and the bitstring he has so far truly random. This is despite the fact that the program is completely deterministic from the perspective of an outside observer. It will be obvious that this is a model of quantum randomness under the MWI of QM: God does not play dice, but his creatures do. -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Jan 1 02:52:32 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 31 Dec 2009 18:52:32 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <874371.98899.qm@web36502.mail.mud.yahoo.com> --- On Thu, 12/31/09, Stathis Papaioannou wrote: >> It seems then that you suppose yourself as possessing >> or equaling this "abstract sphere of mind" that your brain >> instantiated, and that you suppose further that this >> abstract sphere of yours will continue to exist after your >> body dies. Correct me if I'm wrong. > > After I die my mind can be instantiated again multiple > times, with different matter. I see. So then not only do you believe you have something like a soul (though you use this euphemism "sphere of mind") you believe also in the possible multiple reincarnations of your soul. Interesting. > If the brain were identical with the mind this would > not be possible: Naturally you must also believe in the duality of mind and matter, an idea left over as if as a bad hangover from Descartes and other dualists. Your beliefs above would otherwise make no sense to you. > These metaphysical musings are interesting but have no > bearing on the rigorous argument presented before, On the contrary, they must certainly do. I will tell you this in no uncertain terms: you will never understand Searle until learn to see past the sort of religious ideas you have presented above. And until you understand him, you won't understand what you need to do to refute his argument. You might start here: http://socrates.berkeley.edu/~jsearle/Consciousness1.rtf -gts From gts_2000 at yahoo.com Fri Jan 1 03:18:06 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 31 Dec 2009 19:18:06 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <722968.21666.qm@web36506.mail.mud.yahoo.com> --- On Thu, 12/31/09, Stathis Papaioannou wrote: > rigorous argument presented before, which showed that > whatever the mind is, if the function of the device generating it > is reproduced then the mind is also reproduced. You did not come up > with a rebuttal, other than to put forward your feeling that some > magic would happen to stop it being true. I stated twice during your presentation of your argument that you were speaking of a logical contradiction. I played along anyway, and from this you apparently got the idea that you had somehow presented an argument that I would find convincing. I'll put an experiment to you, and you tell me what the answer should be: "Please imagine that your brain exists as partly real and partly as an abstract formal description of its former reality, and then report your imagined subjective experience." I hope can appreciate how any reasonable person would consider that question incoherent and even ludicrous. I hope you can also see that from my point of view, you asked me that same question. 
-gts From scerir at libero.it Fri Jan 1 03:57:05 2010 From: scerir at libero.it (scerir) Date: Fri, 1 Jan 2010 04:57:05 +0100 (CET) Subject: [ExI] Some new angle about AI Message-ID: <10954041.186901262318225013.JavaMail.defaultUser@defaultHost> > There is a way to produce algorithmic randomness with a Turing > machine, requiring something of a trick. The Turing machine runs a > program which generates a virtual world containing an observer, and > the observer has a piece of paper with a bitstring written on it. At > regular intervals the program duplicates the entire virtual world > including the observer, but in one copy appends 1 to the bitstring and > in the other appends 0. From the point of view of the observer, > whether the next bit will be a 1 or a 0 is indeterminate, and the > bitstring he has so far truly random. This is despite the fact that > the program is completely deterministic from the perspective of an > outside observer. It will be obvious that this is a model of quantum > randomness under the MWI of QM: God does not play dice, but his > creatures do. > Stathis Papaioannou A Turing machine can compute many things. It cannot compute other things, like (in general) real numbers (because of their incompressibility). I can agree that, like in your example, a perspective from within is different from the perspective of an outsider. A God who does not play dice is well possible (even the late Dirac had that opinion) but the God who plays the ManyWorlds or the Great Programmer who computes all evolutions of all universes, and not the specific evolution of the specific universe, are lazy, IMO. From stathisp at gmail.com Fri Jan 1 04:09:07 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 15:09:07 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <874371.98899.qm@web36502.mail.mud.yahoo.com> References: <874371.98899.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/1 Gordon Swobe : >> After I die my mind can be instantiated again multiple >> times, with different matter. > > I see. So then not only do you believe you have something like a soul (though you use this euphemism "sphere of mind") you believe also in the possible multiple reincarnations of your soul. Interesting. Even if it turns out that the brain is uncomputable, the mind can be duplicated by assembling atoms in the same configuration as the original brain. If you accept this and you describe it as transfer of a soul from one body to another, then you believe in a soul. Most scientists and rational philosophers believe this but don't call it a soul, preferring to reserve that term for a supernatural entity created by God. Indeed, those who think your mind *won't* be duplicated if your brain is duplicated at least tacitly believe in a supernatural soul. >> If the brain were identical with the mind this would >> not be possible: > > Naturally you must also believe in the duality of mind and matter, an idea left over as if as a bad hangover from Descartes and other dualists. Your beliefs above would otherwise make no sense to you. Are you a dualist regarding computer programs? On the one hand there is the physical hardware implementing the program, and on the other hand there is the abstract program itself. If that is dualism, then the term could be equally well applied to the mind/body distinction. >> These metaphysical musings are interesting but have no >> bearing on the rigorous argument presented before, > > On the contrary, they must certainly do. 
I will tell you this in no uncertain terms: you will never understand Searle until learn to see past the sort of religious ideas you have presented above. And until you understand him, you won't understand what you need to do to refute his argument. > > You might start here: > > http://socrates.berkeley.edu/~jsearle/Consciousness1.rtf There isn't actually anything in that paper with which I or most of the others on this list who have been arguing with you will disagree. The only serious error Searle makes is to claim that computer programs can't generate consciousness while at the same time holding that the brain can be described algorithmically. These two ideas lead to an internal inconsistency, which is the worst sort of philosophical error. -- Stathis Papaioannou From stathisp at gmail.com Fri Jan 1 05:29:32 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jan 2010 16:29:32 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <722968.21666.qm@web36506.mail.mud.yahoo.com> References: <722968.21666.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/1 Gordon Swobe : > --- On Thu, 12/31/09, Stathis Papaioannou wrote: > >> rigorous argument presented before, which showed that >> whatever the mind is, if the function of the device generating it >> is reproduced then the mind is also reproduced. You did not come up >> with a rebuttal, other than to put forward your feeling that some >> magic would happen to stop it being true. > > I stated twice during your presentation of your argument that you were speaking of a logical contradiction. I played along anyway, and from this you apparently got the idea that you had somehow presented an argument that I would find convincing. > > I'll put an experiment to you, and you tell me what the answer should be: > > "Please imagine that your brain exists as partly real and partly as an abstract formal description of its former reality, and then report your imagined subjective experience." > > I hope can appreciate how any reasonable person would consider that question incoherent and even ludicrous. I hope you can also see that from my point of view, you asked me that same question. What does "partly as an abstract formal description of its former reality" mean? It certainly could be taken as incoherent nonsense. I asked you no such thing. I asked what would happen if a surgeon installed in your brain artificial neurons which were designed so that they perform the same function as biological neurons. You agreed that it is possible to make such neurons, and you agreed that they could be installed. These are easily understandable, concrete concepts. Such procedures might even become commonplace in a few years time, as treatment for patients who have had strokes or head injuries. Naturally, the patients would be observed after the procedure and they would either behave normally and say that they felt normal, or they would not. It's perfectly straightforward, and the whole experiment from start to finish could be done by technicians with no idea about philosophy of mind. Your insistence that it's nonsense suggests that you have such a strong attachment to your position that you don't want to face any argument that you can see would challenge it. -- Stathis Papaioannou From jonkc at bellsouth.net Fri Jan 1 06:53:15 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 1 Jan 2010 01:53:15 -0500 Subject: [ExI] The symbol grounding problem in strong AI. 
In-Reply-To: <853659.71219.qm@web36508.mail.mud.yahoo.com> References: <853659.71219.qm@web36508.mail.mud.yahoo.com> Message-ID: On Dec 31, 2009, Gordon Swobe wrote: > Simulated objects affect each other in the sense that mathematical abstractions affect each other OK, that's all I need! And mind is an abstraction, that's why computers will be able to produce it exactly, not simulate it, produce it. > and we can make pragmatic use of those abstractions in computer modeling. We can indeed. Pragmatic means something works, but you seem to think that the fact that something works is no reason to think it might be true; you have already demonstrated that you think ideas that don't work, such as your ideas that conflict with evolution, is no reason to think they are untrue. I disagree, I do not think that strategy leads to enlightenment. > But those objects cannot as you claimed in a previous message "burn" each other, nor can they as Stathis claimed have the property of "wetness". Simulated fire doesn't burn things and simulated waterfalls are not "wet". As you have done many many times before you make declarations but don't say why we should believe such statements and you don't even try to refute the arguments against them, you just ignore them. Well I admit that is easier. > It looks like religion to me when people here confuse computer simulations of things with real things I maintain that 3 facts are undeniable: 1) It is virtually certain that random mutation and natural selection produced life on Earth. 2) It is virtually certain that evolution can see intelligent behavior but is blind to consciousness. 3) It is absolutely certain that there is at least one conscious being on this planet. From that I conclude that intelligent behavior must produce consciousness. You say you don't understand how that could be and I don't exactly understand it either but reality is not required to be a slave to our understanding. The way science advances is that evidence amounts that something is puzzling and people start to think of ways of solving the puzzle. Your way is simply to pretend that the evidence doesn't exist, and I don't think objecting to such a philosophy is religious. John K Clark > ,especially when those simulations happen to represent intentional entities, e.g., real people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Fri Jan 1 14:46:46 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 1 Jan 2010 06:46:46 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <803323.66687.qm@web36501.mail.mud.yahoo.com> --- On Thu, 12/31/09, Stathis Papaioannou wrote: > Even if it turns out that the brain is uncomputable, the > mind can be duplicated by assembling atoms in the same configuration > as the original brain. I happen to agree that we can duplicate a brain atom for atom and have the same person at the end (if I didn't then I would not identify with extropianism) but you had asserted something in a previous post suggesting that your "abstract sphere of mind" exists independently of the physical matter that comprises your brain. In my opinion you fall off the rails there and wander into the land of metaphysical dualism. > Are you a dualist regarding computer programs? No, but you on the other hand should describe yourself as such given that you believe we can get intentional entities from running programs. 
The conventional strong AI research program is based on that same false premise, where software = mind and hardware = brain, and it won't work for exactly that reason. > The only serious error Searle makes is to claim that > computer programs can't generate consciousness while at the same > time holding that the brain can be described algorithmically. No error at all, except that you cannot or will not see past your dualist assumptions, or at least not far enough to see what Searle actually means. I had hoped that paper I referenced would bring you some clarity but I see it didn't. What you cannot or refuse to see is that a formal program simulating the brain cannot cause consciousness in a s/h system that implements it any more than can a simulated thunderstorm cause wetness in that same s/h system. It makes no difference how perfectly that simulation describes the thing it simulates. If you expect to find consciousness in or stemming from a computer simulation of a brain then I would suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? And why not? Because, as I keep trying to communicate to you, simulations of things do not equal the things they simulate. Descriptions of things do not equal the things they describe. -gts From stathisp at gmail.com Fri Jan 1 16:01:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 03:01:31 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <803323.66687.qm@web36501.mail.mud.yahoo.com> References: <803323.66687.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/2 Gordon Swobe : > I happen to agree that we can duplicate a brain atom for atom and have the same person at the end (if I didn't then I would not identify with extropianism) but you had asserted something in a previous post suggesting that your "abstract sphere of mind" exists independently of the physical matter that comprises your brain. In my opinion you fall off the rails there and wander into the land of metaphysical dualism. You destroy a person and make a copy, and you have the "same" person again even if the original has been dead a million years. The physical object doesn't survive, but the mind does; so the mind is not the same as the physical object. Whether you call this dualism or not is a matter of taste. >> Are you a dualist regarding computer programs? > > No, but you on the other hand should describe yourself as such given that you believe we can get intentional entities from running programs. The conventional strong AI research program is based on that same false premise, where software = mind and hardware = brain, and it won't work for exactly that reason. I was referring to ordinary programs that aren't considered conscious. The program is not identical with the computer, since the same program can be instantiated on different hardware. If you want to call that dualism, you can. >> The only serious error Searle makes is to claim that >> computer programs can't generate consciousness while at the same >> time holding that the brain can be described algorithmically. > > No error at all, except that you cannot or will not see past your dualist assumptions, or at least not far enough to see what Searle actually means. 
I had hoped that paper I referenced would bring you some clarity but I see it didn't. As I said, I agree with that paper. I just think he's wrong about computers and their potential for consciousness, which in that he only alludes to in passing. > What you cannot or refuse to see is that a formal program simulating the brain cannot cause consciousness in a s/h system that implements it any more than can a simulated thunderstorm cause wetness in that same s/h system. It makes no difference how perfectly that simulation describes the thing it simulates. > > If you expect to find consciousness in or stemming from a computer simulation of a brain then I would suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? And why not? Because, as I keep trying to communicate to you, simulations of things do not equal the things they simulate. Descriptions of things do not equal the things they describe. You keep repeating this, but I have shown that a device which reproduces the behaviour of a biological brain will also reproduce the consciousness. The argument is robust in that it relies on no other philosophical or scientific assumptions. How the brain behaviour is reproduced is not actually part of the argument. If it turns out that the brain's behaviour can be described algorithmically, as Searle and most cognitive scientists believe, then that establishes computationalism; if not, it still establishes functionalism by another means. -- Stathis Papaioannou From nanite1018 at gmail.com Fri Jan 1 16:08:30 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Fri, 1 Jan 2010 11:08:30 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <803323.66687.qm@web36501.mail.mud.yahoo.com> References: <803323.66687.qm@web36501.mail.mud.yahoo.com> Message-ID: <9CC8BBA6-D1E8-4834-9312-D6A6C25ECA7F@GMAIL.COM> > If you expect to find consciousness in or stemming from a computer > simulation of a brain then I would suppose you might also expect to > eat a photo of a ham sandwich off a lunch menu and find that it > tastes like the ham sandwich it simulates. After all, on your logic > the simulation of the ham sandwich is implemented in the substrate > of the menu. But that piece of paper won't taste much like a ham > sandwich, now will it? And why not? Because, as I keep trying to > communicate to you, simulations of things do not equal the things > they simulate. Descriptions of things do not equal the things they > describe. > > -gts I'll just jump in to say that this is a bad analogy, at best. Consciousness is not a thing in the world that makes things happen directly. Consciousness only effects the world by giving "directing" the body to do things. If your simulation of a ham sandwich can also interact with my taste buds exactly like a ham sandwich (akin to hooking up a simulation of the brain to a body through electro-neuro connections, etc.) then fine. But a really good photo isn't a perfect simulation, and it certainly cannot interact with the world in the way a sandwich actually does. Same thing with your other analogy about thunderstorms. A simulation of a thunderstorm can't make things wet in the real world because it is in a computer. But it can make the entities in the simulation "wet". 
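A toy sketch of the point being made here and continued just below (everything in it is invented for illustration): simulated rain changes only the simulation's own objects, and nothing outside the program gets wet unless its outputs are deliberately wired out to real hardware.

class SimObject:
    def __init__(self, name):
        self.name = name
        self.wet = False                 # state that exists only inside the simulation

def run_storm(objects, actuator=None):
    for obj in objects:
        obj.wet = True                   # simulated entities get "wet"
        if actuator is not None:
            actuator(obj.name)           # optional hook out to real-world hardware

ground = [SimObject("lawn"), SimObject("roof")]
run_storm(ground)                        # no actuator passed: the real world stays dry
print([(o.name, o.wet) for o in ground])

Passing a callback that opens real water valves as the actuator is the kind of connection to the world the rest of this post describes.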
And if you had a really complex machine that could make wind and distribute water molecules and have a big screen to show a photo of what the thunderstorm would look like from the ground, well then it could make things wet. Divorcing the simulation from the world will prevent it from doing the things that the real thing would do. But if you connect it to the "real" world in a way that lets it do everything it would normally do (all the outputs from your simulation of the brain, for example, direct a body, and all the inputs from the body's senses go to the simulation), then it will do exactly what it normally does. So all you have to do is connect your simulation of a brain to a body, and it will be just like the actual brain. Joshua Job nanite1018 at gmail.com From gts_2000 at yahoo.com Fri Jan 1 16:20:13 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 1 Jan 2010 08:20:13 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <935067.36966.qm@web36503.mail.mud.yahoo.com> --- On Fri, 1/1/10, Stathis Papaioannou wrote: >> I'll put an experiment to you, and you tell me what >> the answer should be: >> >> "Please imagine that your brain exists as partly real >> and partly as an abstract formal description of its former >> reality, and then report your imagined subjective >> experience." > >> I hope can appreciate how any reasonable person would >> consider that question incoherent and even ludicrous. I hope >> you can also see that from my point of view, you asked me >> that same question. > What does "partly as an abstract formal description of its > former reality" mean? It means that programs exist as formal descriptions of real or supposed objects or processes. They describe and simulate real objects and real processes but they do not equal them. > I asked you no such thing. You did but apparently you didn't understand me well enough to realize it. > I asked what would happen if a > surgeon installed in your brain artificial neurons which were > designed so that they perform the same function as biological neurons. I have no problem with artificial neurons, per se. I have a problem with the notion that programs that simulate real objects and processes, such as those that exist in your plan for artificial neurons, can have the same sort of reality as the neurological objects and processes they simulate. They can't. You might just as well have asked me to imagine myself as imaginary, whatever that means. -gts
From sparge at gmail.com Fri Jan 1 16:51:20 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 1 Jan 2010 11:51:20 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: References: <853659.71219.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/1 John Clark : > > I maintain that 3 facts are undeniable: > > 1) It is virtually certain that random mutation and natural selection > produced life on Earth. > 2) It is?virtually certain that evolution can see intelligent behavior but > is blind to consciousness. If you mean that intelligence improves a species' ability to survive then I agree. > 3) It is absolutely certain that there is at least one conscious being on > this planet. Granted, allowing for the possibility that "this planet" doesn't really exist as we think it does. > From that I conclude that?intelligent behavior must produce consciousness. OK, here you lost me. I don't see how you can say anything stronger than "intelligent behavior *can* produce consciousness". -Dave From gts_2000 at yahoo.com Fri Jan 1 17:13:23 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 1 Jan 2010 09:13:23 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <528478.52920.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/1/10, Stathis Papaioannou wrote: > You destroy a person and make a copy, and you have the > "same" person again even if the original has been dead a million years. > The physical object doesn't survive, but the mind does Okay, but you'll agree I assume that the person's intentionality goes away completely for a million years? He went away to become food for worms (or to cryo, whatever). We can rightly consider anyone during that million year period who claims his mind still exists as a loon who believes in ghosts. Yes? 
> > No, but you on the other hand should describe yourself > as such given that you believe we can get intentional > entities from running programs. The conventional strong AI > research program is based on that same false premise, where > software = mind and hardware = brain, and it won't work for > exactly that reason. > > I was referring to ordinary programs that aren't considered > conscious. The program is not identical with the computer, since the > same program can be instantiated on different hardware. If you want to > call that dualism, you can. But I think you would expect the same for a program that had somehow caused strong AI. That is the dualistic approach to strong AI that Searle takes issue with. For strong AI to work (as it does in humans that have the same capability) we need to re-create the substance of it (not merely the form of it as in a program) much like nature did and exactly as you did in your experiment above about recreating a copy of the brain. > As I said, I agree with that paper. I just think he's wrong > about computers and their potential for consciousness, which in > that he only alludes to in passing. I pointed you to that paper to show you his conception of consciousness/intentionality, and because if I remember correctly he also discusses the problem with duality. > > If you expect to find consciousness in or stemming > from a computer simulation of a brain then I would suppose > you might also expect to eat a photo of a ham sandwich off a > lunch menu and find that it tastes like the ham sandwich it > simulates. After all, on your logic the simulation of the > ham sandwich is implemented in the substrate of the menu. > But that piece of paper won't taste much like a ham > sandwich, now will it? And why not? Because, as I keep > trying to communicate to you, simulations of things do not > equal the things they simulate. Descriptions of things do > not equal the things they describe. > > You keep repeating this, but I have shown that a device > which reproduces the behaviour of a biological brain will also > reproduce the consciousness. You didn't show it to me. If you showed me anything, you showed me that an artificial brain that behaves like a real brain but does not have the material substance of a real brain will result in a mindless cartoon character that merely acts like he has intentionality, i.e., weak AI. You'll find it easier to see if you replace his entire brain with a formal programmatic description of it. Programs merely describe the real or supposed things that they're about. They're the depiction of food on a lunch menu, not the food itself. -gts From stefano.vaj at gmail.com Fri Jan 1 22:01:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 1 Jan 2010 23:01:50 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <945049.54403.qm@web65608.mail.ac4.yahoo.com> Message-ID: <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> 2009/12/30 Stathis Papaioannou : > If the brain is computable it does not necessarily mean there will be > computational shortcuts in predicting human behaviour. You may just > have to simulate the human and let the program run to see what > happens. How can the brain not be computable as far as its *computations* are concerned? Because the real point of AGI is certainly not that of replicating, say, its metabolism... 
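One way to picture the distinction being drawn here, as a hedged toy example (the numbers and names are invented): the same input-output computation can be modelled with or without metabolic bookkeeping, and the extra detail never changes what the model outputs.

def fires_abstract(inputs, threshold=1.0):
    # the bare computation: does the summed input reach threshold?
    return sum(inputs) >= threshold

class NeuronWithMetabolism:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.atp = 100.0                 # crude stand-in for metabolic state

    def fires(self, inputs):
        self.atp -= 0.1 * len(inputs)    # bookkeeping only; it never affects the result
        return sum(inputs) >= self.threshold

signals = [0.4, 0.3, 0.5]
print(fires_abstract(signals) == NeuronWithMetabolism().fires(signals))   # True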
-- Stefano Vaj From stefano.vaj at gmail.com Fri Jan 1 22:20:44 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 1 Jan 2010 23:20:44 +0100 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> Message-ID: <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> 2009/12/31 John Clark : > On Dec 30, 2009, at 3:04 PM, scerir wrote: > no Turing machine can enumerate an infinity of correct bits of the > sequence produced by a quantum device > > It's worse than that, there are numbers (almost all numbers in fact) that a > Turing machine can't even come arbitrarily close to evaluating. A Quantum > Computer probably couldn't do that either but it hasn't been proven. But we can say that organic brains do much worse than both kinds of computers at mathematical problems... -- Stefano Vaj From stathisp at gmail.com Sat Jan 2 01:21:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 12:21:50 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <935067.36966.qm@web36503.mail.mud.yahoo.com> References: <935067.36966.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/2 Gordon Swobe : > --- On Fri, 1/1/10, Stathis Papaioannou wrote: > >>> I'll put an experiment to you, and you tell me what >>> the answer should be: >>> >>> "Please imagine that your brain exists as partly real >>> and partly as an abstract formal description of its former >>> reality, and then report your imagined subjective >>> experience." >> >>> I hope can appreciate how any reasonable person would >>> consider that question incoherent and even ludicrous. I hope >>> you can also see that from my point of view, you asked me >>> that same question. > > >> What does "partly as an abstract formal description of its >> former reality" mean? > > It means that programs exist as formal descriptions of real or supposed objects or processes. They describe and simulate real objects and real processes but they do not equal them. > >> I asked you no such thing. > > You did but apparently you didn't understand me well enough to realize it. Right, I asked you the question from the point of view of a concrete-thinking technician. This simpleton sets about building artificial neurons from parts he buys at Radio Shack without it even occurring to him that the programs these parts run are formal descriptions of real or supposed objects which simulate but do not equal the objects. When he is happy that his artificial neurons behave just like the real thing he has his friend the surgeon, also technically competent but not philosophically inclined, install them in the brain of a patient rendered aphasic after a stroke. We can add a second part to the experiment in which the technician builds another set of artificial neurons based on clockwork nanomachinery rather than digital circuits and has them installed in a second patient, the idea being that the clockwork neurons do not run formal programs. You then get to talk to the patients. Will both patients be able to speak equally well? If so, would it be right to say that one understands what he is saying and the other doesn't? Will the patient with the clockwork neurons report he feels normal while the other one reports he feels weird? Surely you should be able to observe *something*. If you coped with the Chinese Room thought experiment but you claim the one I have just described is incoherent or ridiculous then you are being intellectually dishonest. 
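A minimal sketch of the acceptance test the hypothetical technician above might run, with all data and names invented for illustration: the test compares only inputs and outputs against recordings from a biological neuron, so a program-driven replacement and a non-programmatic one that both match those recordings are indistinguishable to it.

# Recorded stimulus/response pairs from the biological neuron being replaced
recorded_io = [((0.2, 0.1), False), ((0.9, 0.4), True), ((0.5, 0.6), True)]

def digital_neuron(inputs):
    # explicit program: fire when the summed input reaches a threshold
    return sum(inputs) >= 1.0

def clockwork_neuron(inputs):
    # stand-in for a non-programmatic device, modelled here as accumulating "tension"
    tension = 0.0
    for x in inputs:
        tension += x
    return tension >= 1.0

def passes_bench_test(candidate):
    return all(candidate(inputs) == fired for inputs, fired in recorded_io)

print(passes_bench_test(digital_neuron), passes_bench_test(clockwork_neuron))
# Both print True: by this criterion alone the surgeon has no basis to prefer one.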
-- Stathis Papaioannou From stathisp at gmail.com Sat Jan 2 02:00:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 13:00:42 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> References: <945049.54403.qm@web65608.mail.ac4.yahoo.com> <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> Message-ID: 2010/1/2 Stefano Vaj : > 2009/12/30 Stathis Papaioannou : >> If the brain is computable it does not necessarily mean there will be >> computational shortcuts in predicting human behaviour. You may just >> have to simulate the human and let the program run to see what >> happens. > > How can the brain not be computable as far as its *computations* are > concerned? Because the real point of AGI is certainly not that of > replicating, say, its metabolism... It's more an issue for mind uploading that for AGI. The only certain way to simulate a brain is to simulate the activity of neurons at the molecular level. Even if we look at a simple binary behaviour such as whether a neuron fires or not it will be dependent on everything that goes on inside the cell. There will probably be allowable computational shortcuts but we can't know without careful research what these shortcuts will be. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 2 02:06:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 13:06:51 +1100 Subject: [ExI] Some new angle about AI. In-Reply-To: <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> Message-ID: 2010/1/2 Stefano Vaj : > But we can say that organic brains do much worse than both kinds of > computers at mathematical problems... But organic brains do better than computers at the highest level of mathematical creativity. Interestingly, it is this rather than the ability to have feelings, produce art etc. that Roger Penrose used in his case against AI. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 2 04:12:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jan 2010 15:12:44 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <528478.52920.qm@web36505.mail.mud.yahoo.com> References: <528478.52920.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/2 Gordon Swobe : > You didn't show it to me. If you showed me anything, you showed me that an artificial brain that behaves like a real brain but does not have the material substance of a real brain will result in a mindless cartoon character that merely acts like he has intentionality, i.e., weak AI. > > You'll find it easier to see if you replace his entire brain with a formal programmatic description of it. Programs merely describe the real or supposed things that they're about. They're the depiction of food on a lunch menu, not the food itself. The reason I insist on the partial replacement experiment is that it shows the absurdity of your position by forcing you to consider what effect functionally identical but mindless components would have on the rest of the brain. But it seems you are so sure your position is correct that you consider any argument purporting to show otherwise as wrong by definition, even if you can't point out where the problem is. 
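A hedged sketch of the partial-replacement experiment in miniature (the response curve and circuit are invented, not anything claimed in the thread): units in a toy chain are swapped one at a time for functionally identical artificial ones, and the chain's output is checked after every swap.

def biological_unit(x):
    return max(0.0, x - 0.1)             # placeholder response curve

def artificial_unit(x):
    return max(0.0, x - 0.1)             # replacement with the same input-output rule

def run_chain(units, stimulus):
    signal = stimulus
    for unit in units:
        signal = unit(signal)
    return signal

chain = [biological_unit] * 3
baseline = run_chain(chain, 1.0)

for i in range(len(chain)):               # replace one unit at a time
    chain[i] = artificial_unit
    assert run_chain(chain, 1.0) == baseline   # behaviour is unchanged at every stage

print("fully replaced, output still", baseline)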
-- Stathis Papaioannou From jonkc at bellsouth.net Sat Jan 2 05:44:07 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 2 Jan 2010 00:44:07 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <803323.66687.qm@web36501.mail.mud.yahoo.com> References: <803323.66687.qm@web36501.mail.mud.yahoo.com> Message-ID: <03342575-E77C-4674-92A0-3972A4BD7FC0@bellsouth.net> On Jan 1, 2010, Gordon Swobe wrote: > In my opinion you fall off the rails there and wander into the land of metaphysical dualism. It may be dualism to say that what a thing is and what a thing does are not the same, but it's not metaphysical it's just logical. For example, saying mind is what a brain does is no more metaphysical than saying going fast is what a racing car does. > you on the other hand should describe yourself as such given that you believe we can get intentional entities from running programs. Intentional means calculable, and calculable sounds to me to be something programs should be rather good at. > The conventional strong AI research program is based on that same false premise There is no such thing as strong AI research, there is just AI research. Nobody is doing Artificial Consciousness research because claiming success would be just too easy. > > If you expect to find consciousness in or stemming from a computer simulation of a brain then I would suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. I haven't actually tried to do it but I don't believe that would work very well. It's just a hunch. > After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? No. > And why not? Because a ham sandwich is a noun and a photo of one is a very different noun and consciousness is not even a noun at all. > What you cannot or refuse to see is that a formal program simulating the brain cannot cause consciousness So you and Searle keep telling us over and over and over again, but Gordon, my problem is that I think Charles Darwin was smarter than either one of you. And the fossil record also thinks Darwin was smarter. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Sat Jan 2 07:15:17 2010 From: scerir at libero.it (scerir) Date: Sat, 2 Jan 2010 08:15:17 +0100 (CET) Subject: [ExI] Some new angle about AI Message-ID: <8449203.201761262416517490.JavaMail.defaultUser@defaultHost> [Stefano] But we can say that organic brains do much worse than both kinds of computers at mathematical problems... [Stathis] But organic brains do better than computers at the highest level of mathematical creativity. Interestingly, it is this rather than the ability to have feelings, produce art etc. that Roger Penrose used in his case against AI. # There is an interesting quote here, about the importance intuition (or, to say it better, mathematical intuition) as opposed to the undecidability/uncomputability. "I don't see any reason why we should have less confidence in this kind of perception, i.e., in mathematical intuition, than in sense perception, which induces us to build up physical theories and to expect that future sense perceptions will agree with them and, moreover, to believe that a question not decidable now has meaning and may be decided in the future." - K.Godel, 'What is Cantor's Continuum Problem?', Philosophy of Mathematics, ed. 
P.Benacerraf & H. Putnam, p. 483, (year and publisher unknown). From stefano.vaj at gmail.com Sat Jan 2 10:03:55 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 2 Jan 2010 11:03:55 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <945049.54403.qm@web65608.mail.ac4.yahoo.com> <580930c21001011401v4ea07bc4ode20d816362ec69e@mail.gmail.com> Message-ID: <580930c21001020203q4800fac2s93777961471637f1@mail.gmail.com> 2010/1/2 Stathis Papaioannou : > It's more an issue for mind uploading that for AGI. The only certain > way to simulate a brain is to simulate the activity of neurons at the > molecular level. Even if we look at a simple binary behaviour such as > whether a neuron fires or not it will be dependent on everything that > goes on inside the cell. There will probably be allowable > computational shortcuts but we can't know without careful research > what these shortcuts will be. Even without shortcuts, or approximations "good enough" not to imply any perceivable behavioural modifications, an organic brain is a relatively small system with a very finite number of states. Even very big brains have some 10^20 or something molecules, and that of. say, fruitflies orders of magnitude fewer... -- Stefano Vaj From kanzure at gmail.com Sat Jan 2 14:37:23 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 2 Jan 2010 08:37:23 -0600 Subject: [ExI] Fwd: [wta-talk] Enhancement's Good, But Is It Necessary? In-Reply-To: <3d1c82451001012016w2f8d1ed1jadc8ac1d53107de8@mail.gmail.com> References: <4B3B5A4E.6020107@gmail.com> <580930c21001011418y63188b1cjaac5d747fc966b30@mail.gmail.com> <3d1c82451001012016w2f8d1ed1jadc8ac1d53107de8@mail.gmail.com> Message-ID: <55ad6af71001020637x5bccd77ey37927d74a3b91133@mail.gmail.com> ---------- Forwarded message ---------- From: Christopher Healey Date: Fri, Jan 1, 2010 at 10:16 PM Subject: Re: [wta-talk] Enhancement's Good, But Is It Necessary? To: Humanity+ Discussion List > how can you justify prioritizing transhumanism? It's much easier to scale a sheer face with the proper gear. ?We stand at the base camp of many problems that tower over us menacingly, and we might even be equipped to tackle a few of them. ?But which few? Given limited resources, which problems are the most moral to ignore? We all employ tools to effect changes within our sphere of influence; better gear can deliver more leverage to assail more challenges. Perhaps, if one does it right, to assail entire classes of challenge in one fell swoop. Transhumanism simply recognizes that as we zoom inward from our sphere of influence's farthest reaches, it doesn't stop at our skin, but continues inward to our deepest structure. ? To be responsible to our intentions of a better world, we are compelled to look not only at external, but also internal changes; if we can safely deliver these internal *choices*, how could we morally squander such leverage? ?It's all about getting there (a better world), from here. ?Safely and responsibly, of course. ?Transhumanism is a rooted sub-goal of seeking a better future. From gts_2000 at yahoo.com Sat Jan 2 14:50:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 2 Jan 2010 06:50:35 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <688803.71262.qm@web36507.mail.mud.yahoo.com> --- On Fri, 1/1/10, Stathis Papaioannou wrote: > Right, I asked you the question from the point of view of > a concrete-thinking technician. 
This simpleton sets about > building artificial neurons from parts he buys at Radio Shack > without it even occurring to him that the programs these parts run > are formal descriptions of real or supposed objects which simulate but > do not equal the objects. When he is happy that his artificial > neurons behave just like the real thing he has his friend the surgeon, > also technically competent but not philosophically inclined, > install them in the brain of a patient rendered aphasic after a stroke. The surgeon replaces all those neurons relevant to correcting the patient's aphasia with a-neurons programmed and configured in such a way that the patient will pass the Turing test while appearing normal and healthy. We don't know in 2009 if this requires work in areas outside Wernicke's but we'll assume our surgeon here knows. The TT and the subject's reported symptoms represent the surgeon's only means of measuring the supposed health of his patient. > We can add a second part to the experiment in which the technician > builds another set of artificial neurons based on clockwork nanomachinery > rather than digital circuits and has them installed in a second > patient, the idea being that the clockwork neurons do not run formal > programs. A second surgeon does the same with this patient, releasing him from the hospital after he appears healthy and passes the TT. > You then get to talk to the patients. Will both patients be > able to speak equally well? Yes. > If so, would it be right to say that one understands what he is saying > and the other doesn't? Yes. On Searle's view the TT gives false positives for the first patient. > Will the patient with the clockwork neurons report he feels normal while > the other one reports he feels weird? Surely you should be able to > observe *something*. If either one appears or reports feeling abnormal, we send him back to the hospital. -gts From gts_2000 at yahoo.com Sat Jan 2 15:29:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 2 Jan 2010 07:29:02 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <03342575-E77C-4674-92A0-3972A4BD7FC0@bellsouth.net> Message-ID: <393289.30312.qm@web36506.mail.mud.yahoo.com> --- On Sat, 1/2/10, John Clark wrote: >> In my opinion you fall off the rails there and wander into the land of >> metaphysical dualism. > > It may be dualism to say that what a thing is > and what a thing does are not the same, but it's not > metaphysical it's just logical. I see a metaphysical problem only when people assert that the mind exists as some sort of abstract entity (programmatic, algorithmic, whatever) distinct from the brain that actually does the work that we describe with those abstractions. If we want to say that mind exists in such abstract idealistic ways, that's fine, but now we must contend with all the problems associated with metaphysical dualism. Where does that mind exist? In the platonic realm? In the mind of god? Where? And how can idealistic entities affect the material world? And so on. I would rather not go down that road, nor would Searle, and I assume nobody here wants to go there either. > Intentional means calculable, and?calculable?sounds to me to be something > programs should be rather good at.? Good at simulating intentionality, yes. 
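A toy sketch of the only instruments the surgeon in the exchange above is allowed, behavioural questioning and self-report, with every question and answer invented for illustration: if the two patients answer every probe identically, this procedure cannot separate them, whatever one believes is or is not going on inside.

import random

def patient_with_programmed_neurons(question):
    answers = {"How do you feel?": "I feel fine.", "What is 7 times 6?": "42"}
    return answers.get(question, "I'm not sure.")

def patient_with_clockwork_neurons(question):
    answers = {"How do you feel?": "I feel fine.", "What is 7 times 6?": "42"}
    return answers.get(question, "I'm not sure.")

def blinded_interview(patients, questions):
    random.shuffle(patients)               # the examiner does not know who is who
    transcripts = [[p(q) for q in questions] for p in patients]
    return transcripts[0] == transcripts[1]

probes = ["How do you feel?", "What is 7 times 6?", "Describe your morning."]
print(blinded_interview([patient_with_programmed_neurons,
                         patient_with_clockwork_neurons], probes))   # True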
>> If you expect to find consciousness in or stemming >> from a computer simulation of a brain then I would suppose >> you might also expect to eat a photo of a ham sandwich off a >> lunch menu and find that it tastes like the ham sandwich it >> simulates. > I haven't actually tried to do it but I don't > believe that would work very well. It's just a hunch. Good hunch. > Because a ham sandwich is a noun and a photo of one is a very different > noun and consciousness is not even a noun at all. My point is that simulations only, ahem, simulate the things they simulate. The system in which we implement a simulation will not equal or contain the thing it simulates. It does not matter what we want to simulate, nor does it matter whether we use software and to implement it in hardware or photos of ham sandwiches to implement it in lunch menus. No matter what we do, simulations of real things will never equal the real things they simulate. I don't see this as an especially difficult concept to fathom, and it has nothing to do with Darwin! -gts From lcorbin at rawbw.com Sat Jan 2 16:03:27 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 02 Jan 2010 08:03:27 -0800 Subject: [ExI] Continued use of the term "intellectuals" is not productive Message-ID: <4B3F6E4F.1000307@rawbw.com> John Clark wrote: > On Dec 30, 2009, Tomasz Rola wrote in Re: [ExI] I accuse intellectuals... or else: > > > I wanted to make a list of intellectual Genghis Khans. > > > > Rafal suggested on Sun, 27 Dec 2009, > > > > > ### Maynard Keynes, John Kenneth Galbraith, Karl Marx, Friedrich > > > Engels, Noam Chomsky, almost any random sociologist since Emile > > > Durkheim, Upton Sinclair, Paul Krugman, Joseph Lincoln Steffens, > > > Albert Einstein, Jeremy Rifkin - collectively contributing to the > > > enactment of a staggering number of stupid policies, starting with > > > meat packing regulations and genetic engineering limits all the way to > > > affirmative action, social security, and the Fed. > > > [Damien added] > > > In US: the neocons are intellectuals, of a sort. They happily provided the > > > Iraq war. Dr. Leon Kass is clearly an intellectual, and he helped to ban > > > embryonic stem cell research... On a more highbrow level, if Heidegger > > > wasn't an intellectual, nobody is. > > I don't say all of these were as evil as Genghis Khan, that's asking for rather a lot, but off the top of my head here are some intellectuals that the world would probably have been better off if they'd never been born: > > Paul of Tarsus > Augustine of Hippo > Martin Luther > Jean-Paul Marat > Vladimir Lenin > Philipp Lenard For long I supposed that there just was confusion between "intellectuals" and "evil intellectuals", and this was getting added to the conviction by many that by definition an "intellectual" is a pointy-headed type leftist. To these people, and there are a lot of them, it would never occur that James Watson or Gauss was an intellectual. 
Paul Johnson didn't help with his book "Intellectuals", though it does contain revealing and utterly devastating biographical sketches about Jean-Jacques Rousseau : 'An Interesting Madman' Shelley, or the Heartlessness of Ideas Karl Marx : 'Howling Gigantic Curses' Henrik Ibsen: 'On the Contrary' Tolstoy: God's Elder Brother The Deep Waters of Ernest Hemingway Jean-Paul Sartre: 'A Little Ball of Fur and Ink" Edmund Wilson: A Brand from the Burning The Troubled Conscience of Victor Gollancz Lies, Damned Lies and Lillian Hellman But I would have preferred is to retain "intellectual" for someone who, well, engages in intellectual activity, and I always tried to think of myself and my friends as such. But the cause is hopeless: Sadly, the term now creates so much confusion is that the only prudent recourse is to drop it, and to just say what you mean instead. This is one of those cases where it is utterly pointless to argue about the meaning of a term, as disappointed as will be those who want to bandy it about as opprobrium. Lee From jonkc at bellsouth.net Sat Jan 2 16:21:20 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 2 Jan 2010 11:21:20 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <393289.30312.qm@web36506.mail.mud.yahoo.com> References: <393289.30312.qm@web36506.mail.mud.yahoo.com> Message-ID: On Jan 2, 2010, Gordon Swobe wrote: >> I see a metaphysical problem only when people assert that the mind exists as some sort of abstract entity (programmatic, algorithmic, whatever) distinct from the brain Fast is abstract, I can't hold fast in my hands, and fast is distinct from a racing car just as mind is not the same as a brain. What's all spooky and metaphysical about that? >> Intentional means calculable, and calculable sounds to me to be something >> programs should be rather good at. > > Good at simulating intentionality, yes. As long as the machine "simulates" intentionality with the same fidelity that it can "simulate" arithmetic or music I don't see there being the slightest problem. And if intentional means calculable or being directed to some object or goal then I can see absolutely no reason a machine couldn't do that, in fact they have been doing exactly that for years. I can only conclude that in Gordon-Speak the word "simulate" means done by a machine and it means precisely nothing more. > > My point is that simulations only, ahem, simulate the things they simulate. You have only one point, machines do simulations. I agree. > I don't see this as an especially difficult concept to fathom, and it has nothing to do with Darwin! OF COURSE IT HAS SOMETHING TO DO WITH DARWIN! But why bother, I've explained exactly why its all about Darwin about 27 times but like so many other logical holes in your theory you don't even try to refute them, you just ignore them; and then repeat the exact same tired old discredited pronouncements with no more evidence to support them than the first time round. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Jan 2 16:46:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 2 Jan 2010 08:46:20 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <03342575-E77C-4674-92A0-3972A4BD7FC0@bellsouth.net> Message-ID: <39800.73598.qm@web36506.mail.mud.yahoo.com> --- On Sat, 1/2/10, John Clark wrote: > There is no such thing as?strong AI > research, there is just?AI research. 
Nobody is doing > Artificial Consciousness research because claiming success > would be just too easy. Stathis and I engage in such research on this list, even as you watch and participate. > Because a ham sandwich is a noun and a photo of one is a very different > noun and consciousness is not even a noun at all. My dictionary calls it a noun. Stathis argues not without reason that if we can compute the brain then computer simulations of brains should have intentionality. I argue that even if we find a way to compute the brain, it does not follow that a simulation of it would have intentionality any more than it follows that a computer simulation of a ham sandwich would taste like a ham sandwich, or that a computer simulation of a waterfall would make a computer wet. Computer simulations of things do not equal the things they simulate. I recall learning of a tribe of people in the Amazon forest or some such place that had never seen cameras. After seeing their photos for the first time, they came to fear them on the grounds that these amazing simulations of themselves captured their spirits. Not only did these naive people believe in spirits, they must also have believed that simulations of things somehow equal the things they simulate. -gts From gts_2000 at yahoo.com Sat Jan 2 17:17:38 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 2 Jan 2010 09:17:38 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: Message-ID: <955017.56632.qm@web36504.mail.mud.yahoo.com> --- On Sat, 1/2/10, John Clark wrote: >> I don't see this as an especially difficult concept to fathom, and it >> has nothing to do with Darwin! > > OF COURSE IT HAS SOMETHING TO DO WITH DARWIN! There you go again. If you think I have an issue with Darwin then either you don't understand me or you don't understand Darwin. I happen to count myself as a big fan of evolution, including evolutionary psychology. I subscribe to Richard Dawkins' gene-centric interpretation. I have ignored your noises about this subject because usually I have very little time on my hands and more interesting things to write about. -gts
Guardian Swobe, as an ISTJ, in your travels among rationalists, idealists, and artisans, you might do well to share this: - Jef (INTJ) From lcorbin at rawbw.com Sat Jan 2 19:22:45 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 02 Jan 2010 11:22:45 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: <68150.34111.qm@web36508.mail.mud.yahoo.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> Message-ID: <4B3F9D05.40903@rawbw.com> Gordon wrote: > Stathis wrote: > >> The only certain way to simulate a brain is to simulate the activity >> of neurons at the molecular level. I assume this means at the input/output level only; that anything further would not add to the experience being had by the entity. > I agree with your general direction but I wonder how you > know we needn't simulate them at the atomic or subatomic > level. How do you know it's not turtles all the way down? Let's suppose for a moment that Gordon is right. In other words, internal mechanisms of the neuron must also be simulated. I want to step back and reexamine the reason that all of this is important, and how our reasoning about it must be founded on one axiom that is quite different from the other scientific ones. And that axiom is moral: if presented with two simulations only one of which is a true emulation, and they're both exhibiting behavior indicating extreme pain, we want to focus all relief efforts only on the one. We really do *not* care a bit about the other. (Again, good philosophy is almost always prescriptive, or entails prescriptive implications.) For those of us who are functionalists (or, in my case, almost 100% functionalists), it seems almost inconceivable that the causal components of an entity's having an experience require anything beneath the neuron level. In fact, it's very likely that the simulation of whole neuron tracks or bundles suffice. But I have no way of going forward to address Gordon's question. Logically, we have no way of knowing that in order to emulate experience, you have to simulate every single gluon, muon, quark, and electron. However, we can *never* in principle (so far as I can see) begin to answer that question, because ultimately, all we'll finally have to go on is behavior (with only a slight glance at the insides). I merely claim that if Gordon or anyone else who doubts were to live 24/7 for years with an entity that acted wholly and completely human, yet who was a known simulation at, say, the neuron level, entirely composed of transistors whose activity could be single-stepped through, then Gordon or anyone else would soon apply the compassionate axiom, and find himself or herself incapable of betraying or inflicting pain on his or her new friend anymore than upon a regular human. Lee From aware at awareresearch.com Sat Jan 2 20:09:13 2010 From: aware at awareresearch.com (Aware) Date: Sat, 2 Jan 2010 12:09:13 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: <4B3F9D05.40903@rawbw.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> Message-ID: On Sat, Jan 2, 2010 at 11:22 AM, Lee Corbin wrote: > Let's suppose for a moment that Gordon is right. In other > words, internal mechanisms of the neuron must also be > simulated. Argh,"turtles all the way down", indeed. Then must nature also compute the infinite expansion of the digits of pi for every soap bubble as well? 
> I want to step back and reexamine the reason that all of this > is important, and how our reasoning about it must be founded > on one axiom that is quite different from the other scientific > ones. > > And that axiom is moral: if presented with two simulations > only one of which is a true emulation, and they're both > exhibiting behavior indicating extreme pain, we want to > focus all relief efforts only on the one. We really do > *not* care a bit about the other. This way too leads to contradiction, for example in the case of a person tortured, then with memory erased, within a black box. The morality of any act depends not on the **subjective** state of another, which by definition one could never know, but on our assessment of the rightness, in principle, of the action, in terms of our values. > For those of us who are functionalists (or, in my case, almost > 100% functionalists), it seems almost inconceivable that the causal > components of an entity's having an experience require anything > beneath the neuron level. In fact, it's very likely that the > simulation of whole neuron tracks or bundles suffice. Let go of the assumption of an **essential** consciousness, and you'll see that your functionalist perspective is entirely correct, but it needs only the level of detail, within context, to evoke the appropriate responses of the observer. To paraphrase John Clark, "swiftness" is not in the essence of a car, and the closer one looks the less apt one is to find it. Furthermore (and I realize that John didn't say /this/), a car displays "swiftness" only within an appropriate context. But the key understanding is that this "swiftness" (separate from formal descriptions of rotational velocity, power, torque, etc.) is a function of the observer. > But I have no way of going forward to address Gordon's > question. Logically, we have no way of knowing (and this is an example where logic fails but reason still prevails) > that in > order to emulate experience, you have to simulate every > single gluon, muon, quark, and electron. However, we > can *never* in principle (so far as I can see) begin to > answer that question, because ultimately, all we'll > finally have to go on is behavior (with only a slight > glance at the insides). > I merely claim that if Gordon or anyone else who doubts > were to live 24/7 for years with an entity that acted > wholly and completely human, yet who was a known simulation > at, say, the neuron level, entirely composed of transistors > whose activity could be single-stepped through, then Gordon > or anyone else would soon apply the compassionate axiom, > and find himself or herself incapable of betraying or > inflicting pain on his or her new friend anymore than > upon a regular human. And here, despite a ripple (more accurately a fold, or non-monotonicity) and a veering off to infinity on one side of your map of reality, you and I can agree on your conclusion. Happy New Year, Lee. - Jef
Yes and dictionaries also call "I" a pronoun, and we know how much confusion that colossal error has given the world. Lexicographers make very poor philosophers. > Stathis argues not without reason that if we can compute the brain then computer simulations of brains should have intentionality. Punch card readers from the 1950's had intentionality, at least that's what your lexicographers think, the machine could do things that were calculable and could be directed to a goal. And I remind you that it was you not me that insisted on using the word intentionality rather than consciousness; I suppose you thought it sounded cooler. > I argue that even if we find a way to compute the brain, it does not follow that a simulation of it would have intentionality You haven't argued anything. An argument isn't just contradiction, an argument is a connected series of statements intended to establish a proposition. You may object to this meaning but I really must insist that argument is an intellectual process. Contradiction is just the automatic gainsaying of any statement. Look, if I argue with you, I must take up a contrary position. Yes, but that's not just saying 'No it isn't. Yes it is! No it isn't! Yes it is! I'm sorry, but your time is up and I'm not allowed to argue anymore. I want to thank Professor Python for the invaluable help he gave me in writing this post. John K Clark > any more than it follows that a computer simulation of a ham sandwich would taste like a ham sandwich, or that a computer simulation of a waterfall would make a computer wet. Computer simulations of things do not equal the things they simulate. > > I recall learning of a tribe of people in the Amazon forest or some such place that had never seen cameras. After seeing their photos for the first time, they came to fear them on the grounds that these amazing simulations of themselves captured their spirits. Not only did these naive people believe in spirits, they must also have believed that simulations of things somehow equal the things they simulate. > > -gts From spike66 at att.net Sat Jan 2 21:38:14 2010 From: spike66 at att.net (spike) Date: Sat, 2 Jan 2010 13:38:14 -0800 Subject: [ExI] quiz for the new year Message-ID: >Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person? >Yes No Cannot be determined The answer is YES of course. Regardless of Anne's marital status, either way a married person is looking at, perhaps gazing fondly and lustily upon, an unmarried person. I supplied the adverbs, but the quiz came from the excellent article below on irrationality in smart people: http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-kleiner/ How many answered it correctly? How many of you horndogs are like me, pondering the comely figure of Anne, instead of concentrating your intelligence on being rational? spike
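For anyone who would rather check the case split than eyeball it, here is a minimal sketch in Python that simply enumerates the one unknown, Anne's marital status; the names and the married/unmarried facts are just the puzzle's givens, and the snippet is only an illustration of the case analysis, not part of the original quiz:

# Jack is married, George is not; Jack looks at Anne, Anne looks at George.
looks_at = [("Jack", "Anne"), ("Anne", "George")]
for anne_married in (True, False):
    married = {"Jack": True, "Anne": anne_married, "George": False}
    # Is any married person looking at an unmarried person in this case?
    found = any(married[a] and not married[b] for a, b in looks_at)
    print("Anne married:", anne_married, "-> married looking at unmarried:", found)

Both branches print True, so the answer is "Yes" no matter which way Anne's case falls.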
From spike66 at att.net Sat Jan 2 21:22:17 2010 From: spike66 at att.net (spike) Date: Sat, 2 Jan 2010 13:22:17 -0800 Subject: [ExI] quiz for the new year Message-ID: <7E6C40DDF6AD4E8D854E951388AEFF0C@spike> Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person? Yes No Cannot be determined From jonkc at bellsouth.net Sat Jan 2 22:16:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 2 Jan 2010 17:16:56 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <955017.56632.qm@web36504.mail.mud.yahoo.com> References: <955017.56632.qm@web36504.mail.mud.yahoo.com> Message-ID: On Jan 2, 2010, Gordon Swobe wrote: > If you think I have an issue with Darwin then either you don't understand me or you don't understand Darwin. I'm sure emotionally you side with Darwin, but you haven't pondered his ideas in any depth because if you had you'd know that the idea that consciousness and intelligence can be separated and are caused by processes that have nothing to do with each other is 100% contradictory to Darwin's insight. Foolish creationists who don't understand Darwin like to say that life couldn't have come about by chance alone and they're right, it couldn't come about by chance. Darwin, in what was probably the single best idea any member of our species ever had, came up with a way to explain not only how life came to be but also how intelligence did. However if consciousness and intelligence were not just 2 sides of the same thing but, as you believe, entirely separate phenomena then science has no explanation of how consciousness came to be on this small blue planet. And yet somehow consciousness did come to be; I am conscious and it's not entirely outside the laws of possibility that you are too. You ask us to believe that, besides the process that produced life and intelligence, working in parallel with that and entirely at random, a different... something... created consciousness. For the first time in my life I know what a creationist who doesn't understand Darwin feels like. THE ENTIRE THING IS JUST BRAIN DEAD DUMB. > I have ignored your noises about this subject because usually I have very little time on my hands > and more interesting things to write about. Wow! I sure wish I knew where you wrote those more interesting things, things more interesting than life or intelligence. No doubt you won't respond to any of my points because you're too busy explaining why a computer made of beer cans and toilet paper couldn't be conscious no matter how brilliantly it behaved because it just couldn't; and besides an intelligent beer can would be strange and strange things can't happen. Time management in action. John K Clark From stathisp at gmail.com Sun Jan 3 01:48:13 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jan 2010 12:48:13 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <68150.34111.qm@web36508.mail.mud.yahoo.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/3 Gordon Swobe : > --- On Fri, 1/1/10, Stathis Papaioannou wrote: > >> The only certain way to simulate a brain is to simulate the activity >> of neurons at the molecular level. > > I agree with your general direction but I wonder how you know we needn't simulate them at the atomic or subatomic level. How do you know it's not turtles all the way down? Of course, the behaviour of molecules reduces to the behaviour of atoms and subatomic particles but the models of computational chemistry should take this into account.
We know from experiments that some shortcuts are allowed: for example, radiolabeled biologically active molecules seem to behave normally, indicating that we don't always need to take into account what goes on at the nuclear level. I'm sure there will be other shortcuts allowing modelling above the molecular level, but what these shortcuts will be will require experiment, comparing the model with the real thing and seeing if they match. > At the end of that philosophical tunnel, the simulation of the thing finally becomes the thing it simulates. Form and matter converge. A computer simulation, however faithful, will not be identical to the real thing, as you have correctly pointed out before. However, this does not mean that a simulation cannot perform a function of the real thing. A simulated clock can tell time as well as an analogue clock. In fact, we don't use the term "simulated clock": we say that there are analogue clocks and digital clocks, and both clocks tell time. Similarly, a simulated brain is not identical with a biological brain, but it might perform the same function as a biological brain. -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 3 03:08:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jan 2010 14:08:47 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <4B3F9D05.40903@rawbw.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> Message-ID: 2010/1/3 Lee Corbin : Good to hear from you again, Lee. > For those of us who are functionalists (or, in my case, almost > 100% functionalists), it seems almost inconceivable that the causal > components of an entity's having an experience require anything > beneath the neuron level. In fact, it's very likely that the > simulation of whole neuron tracks or bundles suffice. The only reason to simulate the internal processes of a neuron is that you can't otherwise be sure what it's going to do. For example, the neuron may have decided, in response to past events and because of the type of neuron it is, that it is going to increase production of dopamine receptors, decrease production of MAO and increase production of COMT (both enzymes that break down dopamine and other catecholamines). This is going to change the neuron's sensitivity to dopamine in a complex way, and therefore the neuron's behaviour, and therefore the whole brain's behaviour. In your model of the neuron you need a "sensitivity to dopamine" function which takes as variables the neuron's present state and all the inputs acting on it. If you can figure out what this function is by treating the neuron as a black box then, implicitly, you have modelled its internal processes even though you might not know what dopamine receptors, COMT or MAO are. However, it might be easier to get this function if you model the internal processes explicitly. I could go further and say that it isn't necessary even to simulate the behaviour of a neuron in order to simulate the brain. You could use cubic millimetres of brain tissue as the basic unit, ignoring natural biological boundaries such as cell membranes. If you can predict the cube's outputs in response to inputs, you can predict the behaviour of the whole brain. But for practical reasons, it would be easier to do the modelling at least at the cellular level. > But I have no way of going forward to address Gordon's > question. 
Logically, we have no way of knowing that in > order to emulate experience, you have to simulate every > single gluon, muon, quark, and ?electron. However, we > can *never* in principle (so far as I can see) begin to > answer that question, because ultimately, all we'll > finally have to go on is behavior (with only a slight > glance at the insides). I think the argument from partial brain replacement that I have put forward to Gordon shows that if you can reproduce the behaviour of the brain, then you necessarily also reproduce the consciousness. Simulating neurons and molecules is just a means to this end. -- Stathis Papaioannou From jonkc at bellsouth.net Sun Jan 3 06:40:25 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 3 Jan 2010 01:40:25 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: References: <853659.71219.qm@web36508.mail.mud.yahoo.com> Message-ID: On Jan 1, 2010, Dave Sill wrote: >> From that I conclude that intelligent behavior must produce consciousness. > > OK, here you lost me. I don't see how you can say anything stronger > than "intelligent behavior *can* produce consciousness". If consciousness were not linked with intelligence it would not exist on this planet. Even if consciousness happened by pure chance it wouldn't last because it would have zero survival value, it would fade away through genetic drift just as the eyes of cave creatures disappear because they are a completely useless aid to survival. In spite of all this right now, a half a billion years after evolution invented brains, I am conscious and you may be too. I can only conclude that consciousness is a byproduct of intelligence, it is the way data feels when it is being processed. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sun Jan 3 07:03:02 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 3 Jan 2010 02:03:02 -0500 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> Message-ID: On Jan 1, 2010, Stathis Papaioannou wrote: > organic brains do better than computers at the highest level of > mathematical creativity. Creativity is a moving target, as soon as a computer can do something pundits tell us that the thing in question wasn't really creative after all. > it is this rather than the ability to have feelings, produce art etc. that Roger Penrose used in > his case against AI. The thing I don't understand is that if the human brain makes use of quantum mechanical principles to work its magic why can't we factor numbers better than computers? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jan 3 09:35:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jan 2010 20:35:08 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <688803.71262.qm@web36507.mail.mud.yahoo.com> References: <688803.71262.qm@web36507.mail.mud.yahoo.com> Message-ID: 2010/1/3 Gordon Swobe : > --- On Fri, 1/1/10, Stathis Papaioannou wrote: > >> Right, I asked you the question from the point of view of >> a concrete-thinking technician. 
This simpleton sets about >> building artificial neurons from parts he buys at Radio Shack >> without it even occurring to him that the programs these parts run >> are formal descriptions of real or supposed objects which simulate but >> do not equal the objects. When he is happy that his artificial >> neurons behave just like the real thing he has his friend the surgeon, >> also technically competent but not philosophically inclined, >> install them in the brain of a patient rendered aphasic after a stroke. > > The surgeon replaces all those neurons relevant to correcting the patient's aphasia with a-neurons programmed and configured in such a way that the patient will pass the Turing test while appearing normal and healthy. We don't know in 2009 if this requires work in areas outside Wernicke's but we'll assume our surgeon here knows. > > The TT and the subject's reported symptoms represent the surgeon's only means of measuring the supposed health of his patient. > >> We can add a second part to the experiment in which the technician >> builds another set of artificial neurons based on clockwork nanomachinery >> rather than digital circuits and has them installed in a second >> patient, the idea being that the clockwork neurons do not run formal >> programs. > > A second surgeon does the same with this patient, releasing him from the hospital after he appears healthy and passes the TT. > >> You then get to talk to the patients. Will both patients be >> able to speak equally well? > > Yes. > >> If so, would it be right to say that one understands what he is saying >> and the other doesn't? > > Yes. On Searle's view the TT gives false positives for the first patient. > >> Will the patient with the clockwork neurons report he feels normal while >> the other one reports he feels weird? Surely you should be able to >> observe *something*. > > If either one appears or reports feeling abnormal, we send him back to the hospital. Thank-you for clearly answering the question. Now some problems. Firstly, I understand that you have no philosophical objection to the idea that the clockwork neurons *could* have consciousness, but you don't think that they *must* have consciousness, since you don't (to this point) believe as I do that behaving like normal neurons is sufficient for this conclusion. Is that right? Moreover, if consciousness is linked to substrate rather than function then it is possible that the clockwork neurons are conscious but with a different type of consciousness. Secondly, suppose we agree that clockwork neurons can give rise to consciousness. What would happen if they looked like conventional clockwork at one level but at higher resolution we could see that they were driven by digital circuits, like the digital mechanism driving most modern clocks with analogue displays? That is, would the low level computations going on in these neurons be enough to change or eliminate their consciousness? Finally, the most important point. The patient with the computerised neurons behaves normally and says he feels normal. Moreover, he actually believes he feels normal and that he understands everything said to him, since otherwise he would tell us something is wrong. 
He processes the verbal information processed in the artificial part of his brain (Wernicke's area) and passed to the rest of his brain normally: for example, if you describe a scene he can draw a picture of it, if you tell him something amusing he will laugh, and if you describe a complex problem he will think about it and propose a solution. But despite this, he will understand nothing, and will simply have the delusional belief that he has normal understanding. Or in the case with the clockwork neurons, he may have an alien type of understanding, but again behave normally and have the delusional belief that his understanding is normal. That a person could be a zombie and not know it is logically possible, since a zombie by definition doesn't know anything; but that a person could be a partial zombie and be systematically unaware of this even with the non-zombified part of his brain seems to me incoherent. How do you know that you're not a partial zombie now, unable to understand anything you are reading? What reason is there to prefer normal neurons to computerised zombie neurons given that neither you nor anyone else can ever notice a difference? This is how far you have to go in order to maintain the belief that neural function and consciousness can be separated. So why not accept the simpler, logically consistent and scientifically plausible explanation that is functionalism? I suppose at this point you might return to the original claim, that semantics cannot be derived from syntax, and argue that it is strong enough to justify even such weirdness as partial zombies. But this isn't the case. I actually believe that semantics can *only* come from syntax, but if it can't, your fallback is that semantics comes from the physical activity inside brains. Thus, even accepting Searle's argument, there is no *logical* reason why semantics could not derive from other physical activity, such as the physical activity in a computer implementing a program. -- Stathis Papaioannou From dharris at livelib.com Sun Jan 3 10:47:21 2010 From: dharris at livelib.com (David C. Harris) Date: Sun, 03 Jan 2010 02:47:21 -0800 Subject: [ExI] quiz for the new year In-Reply-To: References: Message-ID: <4B4075B9.2070902@livelib.com> spike wrote: > >Jack is looking at Anne, but Anne is looking at George. Jack is > married but George is not. Is a married person looking at an unmarried > person? > > >Yes No Cannot be determined > > ... > ... excellent article below on irrationality in smart people: > > http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-kleiner/ > > How many answered it correctly? > > How many of you horndogs are like me, pondering the comely figure of > Anne, instead of concentrating your intelligence on being rational? > > spike Excellent article indeed! I initially answered wrong, then analyzed the T/F cases for Anne's marriedness when I read your endorsement of YES and then wondered "now why did I do that wrong and feel so confident?" I first noticed a discrepancy between normal logic and my ability to detect correct answers when I took touch typing, after I became enamored with the potential of computer keyboards. My fingers responded to characters I saw WITHOUT MY MIND being aware of choice to use a particular finger and motion. Another discrepancy occurred with the Miller Analogy Test, a test of verbal analogies. I do those VERY well, scoring in the top 1% of the highest reference group (psychiatric trainees). 
But I noticed that I was picking the right answers without knowing explicitly why I chose those answers. This felt like another "bypass" of explicit personal control. I'm particularly interested in what the article calls "mindware", which probably overlaps with mathematics: representations and methods of processing that lead us to better answers. I've benefited greatly from using Venn diagrams and from checking the units in science calculations (e.g. there is confusion in some of the global warming discussions when people use "kiloWatts" as if it meant "kiloWatt hours"). With this little puzzle, what would be a good mindware tool to use? I built a graph of the "looks at" relationships, but didn't realize I'd need to examine the two values of "married" for Anne. - David Harris, Palo Alto From stathisp at gmail.com Sun Jan 3 11:41:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jan 2010 22:41:19 +1100 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: References: <853659.71219.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/3 John Clark : > If consciousness were not linked with intelligence it would not exist on > this planet. Even if consciousness happened by pure chance it wouldn't last > because it would have zero survival value, it would fade away through > genetic drift just as the eyes of cave creatures disappear because they are > a completely useless aid to survival. In spite of all this right now, a half > a billion years after evolution invented brains, I am conscious and you may > be too. I can only conclude that consciousness is a byproduct of > intelligence, it is the way data feels when it is being processed. This is not true if it is impossible to create intelligent behaviour without consciousness using biochemistry, but possible using electronics, which evolution had no access to. I point this out only for the sake of logical completeness, not because I think it is plausible. -- Stathis Papaioannou From bbenzai at yahoo.com Sun Jan 3 15:36:16 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 3 Jan 2010 07:36:16 -0800 (PST) Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: Message-ID: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Aware wrote: > > Offered, for your consideration, a wondrous land where its > denizens > share an appreciation of novelty, and discovery of the > subtle but > meaningful patterns underlying it all. ?You're entering a > dimension > not only of sight and sound, but of mind. ?This land is a > refuge for > the rare race known as the xNTx, and into this land you, an > ISTJ or > Inspector Guardian, have just stumbled. ?That's the > signpost just > ahead, your next stop, the Extropy list. > > Guardian Swobe, as an ISTJ, in your travels among > rationalists, idealists, and > artisans, you might do well to share this: > > > - Jef (INTJ) Hm. I'm not at all convinced by these personality tests. Every time I've tried a Myers-Briggs test (being just as vain as everyone else), I've got a different result. So far, I'm INTP, INFJ, and INFP, so rather than an xNTx, I seem to be a INxx. Does that mean anything? I'm starting to doubt it. Or maybe it would mean something, if someone could come up with a good set of questions. Usually, at least a few of the questions are silly or unanswerable, so you just have to pick one without worrying too much about it. Also, the summaries remind me more of a horoscope than anything. Why do I never read anything bad about myself? 
That's suspicious, I'm not so vain as to think I don't have bad points. Haven't been able to try the Kiersey test, the guy seems a bit precious about it, and makes people take down sites that offer free versions of it. Which is reason enough to dismiss it, imo. The distinctions seem a bit silly, too. e.g. N/S: you can either be Insrospective OR Observant. ??? What if you are an observant introspective person? It all seems an attempt to force people into categories that are too rigidly defined (where's the category for anti-authoritarian contrarians?). Ben Zaiboc (IENSFTPJ) From gts_2000 at yahoo.com Sun Jan 3 16:20:14 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 3 Jan 2010 08:20:14 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <867961.71243.qm@web36504.mail.mud.yahoo.com> --- On Sun, 1/3/10, Stathis Papaioannou wrote: > Thank-you for clearly answering the question. Welcome. suggested abbreviations and conventions: m-neurons = material ("clockwork") artificial neurons p-neurons = programmatic artificial neurons Sam = the patient with the m-neurons Cram = the patient with the p-neurons (CRA-man) (If Sam and Cram look familiar it's because I used these names in a similar thought experiment of my own design.) > Firstly, I understand that you have no philosophical > objection to the idea that the clockwork neurons *could* have > consciousness, but you don't think that they *must* have consciousness, > since you don't (to this point) believe as I do that behaving like normal > neurons is sufficient for this conclusion. Is that right? No, because I reject epiphenomenalism I think Sam cannot pass the TT without genuine intentionality. If Sam's m-neurons fail to result in a passing TT score for Sam then we have no choice but to take his m-neurons back to the store and demand a refund. > Moreover, if consciousness is linked to substrate rather than function > then it is possible that the clockwork neurons are conscious but with > a different type of consciousness. If Sam passes the TT and reports normal subjective experiences from m-neurons then I will consider him cured. I have no concerns about "type" of consciousness. > Secondly, suppose we agree that clockwork neurons can give > rise to consciousness. What would happen if they looked like > conventional clockwork at one level but at higher resolution we could > see that they were driven by digital circuits, like the digital mechanism > driving most modern clocks with analogue displays? That is, would > the low level computations going on in these neurons be enough to > change or eliminate their consciousness? Yes. In that case the salesperson deceived us. He sold us p-neurons in a box labeled m-neurons. And if we cannot detect the digital nature of these neurons from careful physical inspection and must instead conceive of some digital platonic realm that drives or causes material objects then you will have introduced into our experiment the quasi-religious philosophical idea of substance or property dualism. > Finally, the most important point. The patient with the computerised > neurons behaves normally and says he feels normal. Yes. > Moreover, he actually believes he feels normal and that he understands > everything said to him, since otherwise he would tell us something is > wrong. No, he does not "actually" believe anything. He merely reports that he feels normal and reports that he understands. 
His surgeon programmed all p-neurons such that he would pass the TT and report healthy intentionality, including but not limited to p-neurons in Wernicke's area. > He processes the verbal information processed in the artificial part of > his brain (Wernicke's area) and passed to the rest of his brain > normally: for example, if you describe a scene he can draw > a picture of it, if you tell him something amusing he will laugh, and > if you describe a complex problem he will think about it and > propose a solution. But despite this, he will understand nothing, and > will simply have the delusional belief... He will have no conscious beliefs delusional or otherwise. > That a person could be a zombie and not know it is > logically possible, since a zombie by definition doesn't know anything; > but that a person could be a partial zombie and be systematically > unaware of this even with the non-zombified part of his brain seems to me > incoherent. I see nothing incoherent about it except when you ask me to imagine the unimaginable as you did your last thought experiment. In effect, the relevant parts of Cram's brain act like a computer, or mesh of computers, that run programs. That computer network receives symbolic inputs and generates symbolic outputs. Cram passes the TT yet he has no grasp of the meanings of the symbols his computerized brain manipulates. And if the surgeon programmed the p-neurons correctly then those parts of Cram's brain associated with "reporting subjective feelings" will run programs that ensure Cram will talk very much like Sam. We cannot distinguish Cram from Sam except with philosophical arguments. If we can then one patient or the other has not overcome his illness. One surgeon or the other failed to do his job. > How do you know that you're not a partial zombie now, unable to > understand anything you are reading? I know because I do understand your words and I know I do, (contrast this with your last experiment in which I could not even say with certainty that I existed, much less that I could understand anything). > What reason is there to prefer normal neurons to computerised zombie > neurons given that neither you nor anyone else can ever notice a > difference? I notice the difference and I prefer existence. > This is how far you have to go in order to maintain the belief that > neural function and consciousness can be separated. So why not accept the > simpler, logically consistent and scientifically plausible > explanation that is functionalism? You assume here that I have followed your argument. > I actually believe that semantics can *only* come from syntax, As a programmer of syntax I want to believe that too. Hasn't happened. :) -gts From jonkc at bellsouth.net Sun Jan 3 16:39:21 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 3 Jan 2010 11:39:21 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: References: <853659.71219.qm@web36508.mail.mud.yahoo.com> Message-ID: On Jan 3, 2010, Stathis Papaioannou wrote: > This is not true if it is impossible to create intelligent behaviour > without consciousness using biochemistry, but possible using > electronics, which evolution had no access to. I point this out only > for the sake of logical completeness, not because I think it is > plausible. 
Even in that case it would indicate that it would be easier to make a conscious intelligence than a unconscious one, so it would seem wise that when you encounter intelligence your default assumption should be that consciousness is behind it. Searle assumes the opposite, he assumes unconsciousness regardless of how brilliant an intelligence may be unless consciousness is proven; the catch-22 of course is that consciousness can never be proven. Also, if it's the biochemistry inside the neuron that mysteriously generates consciousness and not the signals between neurons that a computer could simulate then each neuron is on its own as far as consciousness is concerned. One neuron would be sufficient to produce consciousness, it would have to be because they can't work together on this project. If you allow one neuron to have consciousness even though it has no intelligence it would be a very small step to insist that rocks which are no dumber than neurons have it too. So now we have intelligence without consciousness and consciousness without intelligence and rocks with feelings; that is not a position I'd be comfortable defending. As I said before creationists correctly say that life and intelligence are too grand to have come about by chance, but Searle says that's exactly how biology came up with consciousness. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Jan 3 16:14:11 2010 From: spike66 at att.net (spike) Date: Sun, 3 Jan 2010 08:14:11 -0800 Subject: [ExI] quiz for the new year In-Reply-To: <4B4075B9.2070902@livelib.com> References: <4B4075B9.2070902@livelib.com> Message-ID: <3D018AC14F854C5F824D2203728E6109@spike> > ...On Behalf Of David C. Harris. > > ... > > ... excellent article below on irrationality in smart people: > > http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-klein er/ ... > > spike > Excellent article indeed! I initially answered wrong... Thanks, me too. > I first noticed a discrepancy between normal logic and my > ability to detect correct answers when I took touch typing, > after I became enamored with the potential of computer > keyboards... WOW you must be nearly as old as I am. Modern people don't take touch typing, rather they seem to be born knowing the QWERTY arrangement. My son is 3.5 and is already demonstrating some proficiency. > My fingers responded to characters I saw WITHOUT > MY MIND being aware of choice to use a particular finger and motion... Same here. When one can think thru a keyboard, one's writing becomes so much less labored and in many cases filled with silliness, as I have demonstrated in this forum. > ... > (e.g. there is confusion in some of the global warming > discussions when people use "kiloWatts" as if it meant > "kiloWatt hours")... Ja, it is an after-effect of our government confusing the units of jobs created/saved when it meant job-hours created or saved: http://origins.recovery.gov/Pages/home.aspx > ...I built a graph of the "looks at" relationships, but > didn't realize I'd need to examine the two values of > "married" for Anne. - David Harris, Palo Alto The name Anne has been forever associated in my mind with the dazzling Anne Hathaway: http://images.google.com/images?hl=en&source=hp&q=anne+hathaway&rlz=1W1GGLL_ en&um=1&ie=UTF-8&ei=MMBAS6KXDoGmsgOek5zLBA&sa=X&oi=image_result_group&ct=tit le&resnum=1&ved=0CB0QsAQwAA David, I see you are from Palo Alto. We should gather the local ExI-chatters for sushi or something. 
Regarding the point of your post, the curious discrepancy between intelligence and rationality, it is something I have observed and pondered for some time. Our IQ tests do nothing to measure or indicate the level of rationality. I would be interested in figuring out a way to create an RQ test. Any ideas? spike From gts_2000 at yahoo.com Sun Jan 3 17:39:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 3 Jan 2010 09:39:33 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: Message-ID: <348177.73638.qm@web36506.mail.mud.yahoo.com> --- On Sat, 1/2/10, John Clark wrote: >> If you think I have an issue with Darwin then either you don't >> understand me or you don't understand Darwin. > I'm sure emotionally you side with Darwin, but > you haven't pondered his ideas in any depth because if > you had you'd know that the idea that consciousness and > intelligence can be separated and are caused by processes > that have nothing to do with each other is 100% > contradictory to Darwin's insight. You misunderstand me if you I think I believe consciousness and intelligence exist "separately" in humans or other animals. For most purposes we can consider them near synonyms or at least as handmaidens. The distinction does however become important in the context of strong AI research. Symbol grounding requires the sort of subjective first-person perspective that evolved in these machines we call humans, and which probably also evolved in other species. If we can duplicate it in software/hardware systems then they can have strong AI. Not really a complicated idea. -gts From jonkc at bellsouth.net Sun Jan 3 18:37:50 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 3 Jan 2010 13:37:50 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <348177.73638.qm@web36506.mail.mud.yahoo.com> References: <348177.73638.qm@web36506.mail.mud.yahoo.com> Message-ID: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net> On Jan 3, 2010, at 12:39 PM, Gordon Swobe wrote: > You misunderstand me if you I think I believe consciousness and intelligence exist "separately" in humans or other animals. For most purposes we can consider them near synonyms or at least as handmaidens. Well that's a start. > The distinction does however become important in the context of strong AI research. Symbol grounding requires the sort of subjective first-person perspective that evolved in these machines we call humans The operative word in the above is "evolved". Why did this mysterious "subjective symbol grounding" (bafflegab translation: consciousness) evolve? Not only can't you explain how this thing is supposed to work you can't explain how it came to be. Certainly Darwin would be no help as it would have absolutely no effect on behavior, in fact that is precisely why you think the Turing Test doesn't work. And even if it came about by pure chance it wouldn't last, in fact it would be detrimental as the resources used to generate consciousness could better be used for things that actually did something, like help get genes into the next generation. And yet consciousness exists? Why? We don't know a lot about consciousness but one of the few things we do know is that Darwin is screaming that intelligence and consciousness are two sides of the same coin. John K Clark > , and which probably also evolved in other species. If we can duplicate it in software/hardware systems then they can have strong AI. Not really a complicated idea. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Jan 3 18:38:27 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 3 Jan 2010 10:38:27 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: Message-ID: <155704.37523.qm@web36508.mail.mud.yahoo.com> --- On Sun, 1/3/10, John Clark wrote: > As I said before creationists correctly say that > life and intelligence are too grand to have come about by > chance, but Searle says that's exactly how biology came > up with consciousness. Simply not so, John. Consciousness evolved as an adaptive trait to aid intelligence. An amoeba has intelligence in so much as it responds in intelligent ways to its environment, for example in ways that help it find nourishment, but it does not appear to have much consciousness assuming it has any at all. Having no nervous system, it appears to have only what we might call instinctive or unconscious intelligence. Not unlike a computer. Higher organisms like us have intelligence enhanced with consciousness. They can ground symbols do other things that many would like to see computers do. -gts From thespike at satx.rr.com Sun Jan 3 18:45:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 03 Jan 2010 12:45:17 -0600 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net> References: <348177.73638.qm@web36506.mail.mud.yahoo.com> <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net> Message-ID: <4B40E5BD.1050700@satx.rr.com> On 1/3/2010 12:37 PM, John Clark wrote: > intelligence and consciousness are two sides of the same coin. two sides of the same koan From jonkc at bellsouth.net Sun Jan 3 18:45:26 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 3 Jan 2010 13:45:26 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <155704.37523.qm@web36508.mail.mud.yahoo.com> References: <155704.37523.qm@web36508.mail.mud.yahoo.com> Message-ID: <9350929F-5078-4AB7-89A3-C18A1F625929@bellsouth.net> On Jan 3, 2010, Gordon Swobe wrote: > Consciousness evolved as an adaptive trait to aid intelligence. Then the Turing Test works. You can't have it both ways! John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Jan 3 18:46:47 2010 From: spike66 at att.net (spike) Date: Sun, 3 Jan 2010 10:46:47 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <501890.28524.qm@web113619.mail.gq1.yahoo.com> References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Message-ID: > ---...On Behalf Of Ben Zaiboc > ... > > Hm. > > ...Every > time I've tried a Myers-Briggs test (being just as vain as > everyone else), I've got a different result. So far, I'm > INTP, INFJ, and INFP, so rather than an xNTx, I seem to be a > INxx. Does that mean anything?... Multiple personality disorder? {8^D One's score to a large extend does depend on one's mood at the moment, but it is just a game. It would be entertaining to try to create a pool of each of the 16 segments, then try to derive questions that would consistently identify each. I come out pretty consistently INTP or ENTP, depending on my mood. > > ... > > The distinctions seem a bit silly, too. e.g. N/S: you can > either be Insrospective OR Observant. ??? What if you are > an observant introspective person? 
> > It all seems an attempt to force people into categories that > are too rigidly defined (where's the category for > anti-authoritarian contrarians?). > > Ben Zaiboc > (IENSFTPJ) Again, it is just a game, invented my sociologist types. Notice also a similarity with horoscopes: the description of each category is very general and at least moderately flattering. An explanation of the popularity of the game might be that everyone is pleased with the description they see. It would be cool to try to design a four-bit identifier game designed by engineers and scientists. Secondly, as a little joke, make the description of each category a biting criticism, such as the internet gag-horoscopes that went around a few years ago, where the horoscopes started out with the usual mush, but progressed toward ending comments such as "those who know you well consider you an arrogant asshole." {8^D Does anyone here remember that game? spike From gts_2000 at yahoo.com Sun Jan 3 19:42:19 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 3 Jan 2010 11:42:19 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net> Message-ID: <392806.41976.qm@web36507.mail.mud.yahoo.com> --- On Sun, 1/3/10, John Clark wrote: > The operative word in the above is "evolved". Why did this mysterious > "subjective symbol grounding" (bafflegab translation: consciousness) > evolve? To help you communicate better with other monkeys, among other things. I think you really want to ask how it happened that humans did not evolve as unconscious zombies. Why did evolution select consciousness? I think one good answer is that perhaps nature finds it cheaper when its creatures have first-person awareness of the things they do and say. We would probably find it more efficient in computers also. We just need to figure out what nature did, and then do something similar. -gts From gts_2000 at yahoo.com Sun Jan 3 21:07:10 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 3 Jan 2010 13:07:10 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <543D4931-B505-4AC1-A793-42DB91FD2A5A@bellsouth.net> Message-ID: <822376.84128.qm@web36501.mail.mud.yahoo.com> --- On Sun, 1/3/10, John Clark wrote: > Darwin is screaming that intelligence and consciousness are > two sides of the same coin. You're screaming it but I don't hear Darwin screaming it. Again amoebas appear to have intelligence but most people including me would find themselves hard-pressed to say they have what I mean by consciousness. I think Darwin and Searle would enjoy each other's company and never find any reason for disagreement. Searle: "Some people think human brains work just like computers, but I reject the computationalist theory as false." Darwin: "What the heck is a computer, and why should anyone believe my theory of evolution gives a hoot about them?" Might make for an interesting conversation. -gts From brentn at freeshell.org Sun Jan 3 22:50:20 2010 From: brentn at freeshell.org (Brent Neal) Date: Sun, 3 Jan 2010 17:50:20 -0500 Subject: [ExI] Elemental abundances Message-ID: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org> Does anyone have a lead on a source, preferably academic in quality, for the relative elemental abundances in the inner solar system out to Z=92? All the data I've been able to find thus far talks about either the Earth's crust or stops with Z=35. Thanks! Brent -- Brent Neal, Ph.D. 
http://brentn.freeshell.org From pharos at gmail.com Sun Jan 3 23:31:20 2010 From: pharos at gmail.com (BillK) Date: Sun, 3 Jan 2010 23:31:20 +0000 Subject: [ExI] Elemental abundances In-Reply-To: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org> References: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org> Message-ID: On 1/3/10, Brent Neal wrote: > Does anyone have a lead on a source, preferably academic in quality, for the > relative elemental abundances in the inner solar system out to Z=92? All the > data I've been able to find thus far talks about either the Earth's crust or > stops with Z=35. > > Does this help? Quote: Elements with atomic numbers 43, 61, 84?89, and 91 have no stable or long-lived isotopes, and therefore have vanishingly small abundances. ---------- BillK From gts_2000 at yahoo.com Mon Jan 4 02:01:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 3 Jan 2010 18:01:36 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <201790.46747.qm@web36504.mail.mud.yahoo.com> --- On Sun, 1/3/10, Stathis Papaioannou wrote: Revisiting this question: > Firstly, I understand that you have no philosophical > objection to the idea that the clockwork neurons *could* have > consciousness, but you don't think that they *must* have consciousness, > since you don't (to this point) believe as I do that behaving like normal > neurons is sufficient for this conclusion. Is that right? In my last to you I referred to the m-neurons actually used in the experiment. They either work in which case the patient passes the TT and reports normal intentionality and gets released from the hospital, or they don't. But in re-reading your words I understand that you really want to know if I agree that they needn't work or fail solely by virtue of their inputs and outputs. Yes I agree with that, as you already know. We simply do not know what neurons must contain to allow a brain to become conscious, but I'd bet that artificial neurons stuffed only with mashed potatoes and gravy won't do the trick, even if we engineer somehow at the edge to output the correct neurotransmitters into the synapses. > I actually believe that semantics can *only* come from > syntax, but if it can't, your fallback is that semantics > comes from the physical activity inside brains. Something along those lines, yes. But we can't paste form onto substance and expect intrinsic intentionality, and that's all formal programs do to hardware substance. We might just as well write a letter and expect the letter to understand the words. -gts From brentn at freeshell.org Mon Jan 4 02:23:44 2010 From: brentn at freeshell.org (Brent Neal) Date: Sun, 3 Jan 2010 21:23:44 -0500 Subject: [ExI] Elemental abundances In-Reply-To: References: <2592DBB0-C6CC-45F8-A28F-D79A3A57C61D@freeshell.org> Message-ID: <1294BDB8-F242-40DA-BE58-206654BF87AF@freeshell.org> On 3 Jan, 2010, at 18:31, BillK wrote: > Does this help? > > That's close to what I'm looking for. I may just have to pull a copy of the referenced book. I was hoping for something that referenced primary sources that had not only a good graph, but also measurement errors and distributions. Thanks for the link! B -- Brent Neal, Ph.D. http://brentn.freeshell.org From emlynoregan at gmail.com Mon Jan 4 06:42:12 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 4 Jan 2010 17:12:12 +1030 Subject: [ExI] MBTI, and what a difference a letter makes... 
In-Reply-To: References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Message-ID: <710b78fc1001032242j5b02824fxc64324484f0a5cfe@mail.gmail.com> > Again, it is just a game, invented my sociologist types. ?Notice also a > similarity with horoscopes: the description of each category is very general > and at least moderately flattering. ?An explanation of the popularity of the > game might be that everyone is pleased with the description they see. > > It would be cool to try to design a four-bit identifier game designed by > engineers and scientists. ?Secondly, as a little joke, make the description > of each category a biting criticism, such as the internet gag-horoscopes > that went around a few years ago, where the horoscopes started out with the > usual mush, but progressed toward ending comments such as "those who know > you well consider you an arrogant asshole." ?{8^D ?Does anyone here remember > that game? > > spike For a system designed by scientists (well, psychologists), how about the big 5 personality traits? http://en.wikipedia.org/wiki/Big_Five_personality_traits "The Big Five model is considered to be one of the most comprehensive, empirical, data-driven research findings in the history of personality psychology." Elements are: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From emlynoregan at gmail.com Mon Jan 4 06:52:37 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 4 Jan 2010 17:22:37 +1030 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <710b78fc1001032242j5b02824fxc64324484f0a5cfe@mail.gmail.com> References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> <710b78fc1001032242j5b02824fxc64324484f0a5cfe@mail.gmail.com> Message-ID: <710b78fc1001032252n5b301668iabc1e6dc779eb174@mail.gmail.com> 2010/1/4 Emlyn : >> Again, it is just a game, invented my sociologist types. ?Notice also a >> similarity with horoscopes: the description of each category is very general >> and at least moderately flattering. ?An explanation of the popularity of the >> game might be that everyone is pleased with the description they see. >> >> It would be cool to try to design a four-bit identifier game designed by >> engineers and scientists. ?Secondly, as a little joke, make the description >> of each category a biting criticism, such as the internet gag-horoscopes >> that went around a few years ago, where the horoscopes started out with the >> usual mush, but progressed toward ending comments such as "those who know >> you well consider you an arrogant asshole." ?{8^D ?Does anyone here remember >> that game? >> >> spike > > For a system designed by scientists (well, psychologists), how about > the big 5 personality traits? > > http://en.wikipedia.org/wiki/Big_Five_personality_traits > > "The Big Five model is considered to be one of the most comprehensive, > empirical, data-driven research findings in the history of personality > psychology." 
> > Elements are: Openness, Conscientiousness, Extraversion, > Agreeableness, and Neuroticism Oh, also, here's an online test: http://www.outofservice.com/bigfive/ And my results :-) http://www.outofservice.com/bigfive/results/?oR=0.95&cR=0.472&eR=0.562&aR=0.722&nR=0.281&y=1970&g=m -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From stefano.vaj at gmail.com Mon Jan 4 11:47:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 4 Jan 2010 12:47:36 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> Message-ID: <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> 2010/1/3 Stathis Papaioannou : > I think the argument from partial brain replacement that I have put > forward to Gordon shows that if you can reproduce the behaviour of the > brain, then you necessarily also reproduce the consciousness. > Simulating neurons and molecules is just a means to this end. "Consciousness" being hard to define as else than a social construct and a projection (and a pretty vague one, for that matter, inasmuch as it should be extensible to fruitflies...), the real point of the exercise is simply to emulate "organic-like" computational abilities with acceptable performances, brain-like architectures being demonstrably not too bad at the task. I do not really see anything that suggests that we could not do everything in software with a PC, a Chinese Room or a cellular automaton, without emulating *absolutely anything* of the actual working of brains... -- Stefano Vaj From stathisp at gmail.com Mon Jan 4 11:50:36 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jan 2010 22:50:36 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <867961.71243.qm@web36504.mail.mud.yahoo.com> References: <867961.71243.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/4 Gordon Swobe : > suggested abbreviations and conventions: > > m-neurons = material ("clockwork") artificial neurons > p-neurons = programmatic artificial neurons I'll add two more: b-neurons = biological neurons c-neurons = consciousness-capable neurons You claim: all b-neurons are c-neurons some m-neurons are c-neurons no p-neurons are c-neurons > Sam = the patient with the m-neurons > Cram = the patient with the p-neurons (CRA-man) > > (If Sam and Cram look familiar it's because I used these names in a similar thought experiment of my own design.) > >> Firstly, I understand that you have no philosophical >> objection to the idea that the clockwork neurons *could* have >> consciousness, but you don't think that they *must* have consciousness, >> since you don't (to this point) believe as I do that behaving like normal >> neurons is sufficient for this conclusion. Is that right? > > No, because I reject epiphenomenalism I think Sam cannot pass the TT without genuine intentionality. If Sam's m-neurons fail to result in a passing TT score for Sam then we have no choice but to take his m-neurons back to the store and demand a refund. It seems to me you must accept some type of epiphenomenalism if you say that Cram can pass the TT while having different experiences to Sam. This also makes it impossible to ever study the NCC scientifically. This experiment would be the ideal test for it: the p-neurons function like c-neurons but without the NCC, yet Cram behaves the same as Sam. 
There is therefore no way of knowing that you have actually taken out the NCC. >> Moreover, if consciousness is linked to substrate rather than function >> then it is possible that the clockwork neurons are conscious but with >> a different type of consciousness. > > If Sam passes the TT and reports normal subjective experiences from m-neurons then I will consider him cured. I have no concerns about "type" of consciousness. As you agreed in a later post, only some m-neurons are c-neurons. It could be that an internal change in a m-neuron could turn it from a c-neuron to a ~c-neuron. But it seems you are saying there is no in between state: it is either a c-neuron or a ~c-neuron. Moreover, you seem to be saying that there is only one type of c-neuron that could fill the shoes of the original b-neuron, although presumably there are different m-neurons that could give rise to this c-neuron. Is that right? >> Secondly, suppose we agree that clockwork neurons can give >> rise to consciousness. What would happen if they looked like >> conventional clockwork at one level but at higher resolution we could >> see that they were driven by digital circuits, like the digital mechanism >> driving most modern clocks with analogue displays? That is, would >> the low level computations going on in these neurons be enough to >> change or eliminate their consciousness? > > Yes. In that case the salesperson deceived us. He sold us p-neurons in a box labeled m-neurons. And if we cannot detect the digital nature of these neurons from careful physical inspection and must instead conceive of some digital platonic realm that drives or causes material objects then you will have introduced into our experiment the quasi-religious philosophical idea of substance or property dualism. Suppose the m-neuron (which is a c-neuron) contains a mechanism to open and close sodium channels depending on the transmembrane potential difference. Would changing from an analogue circuit to a digital circuit for just this mechanism change the neuron from a c-neuron to a ~c-neuron? If not, then we could go about systematically replacing the analogue subsystems in the neuron until we have a pure p-neuron. At some point, according to what you have been saying, the neuron would suddenly switch from being a c-neuron to a ~c-neuron. Is it plausible that changing, say, one op-amp out of billions would have such drastic effect? On the other hand, what could it mean if the neuron's (and hence the person's) consciousness smoothly decreased in proportion to its degree of computerisation? >> Finally, the most important point. The patient with the computerised >> neurons behaves normally and says he feels normal. > > Yes. > >> Moreover, he actually believes he feels normal and that he understands >> everything said to him, since otherwise he would tell us something is >> wrong. > > No, he does not "actually" believe anything. He merely reports that he feels normal and reports that he understands. His surgeon programmed all p-neurons such that he would pass the TT and report healthy intentionality, including but not limited to p-neurons in Wernicke's area. This is why the experiment considers *partial* replacement. Even before the operation Cram is not a zombie: despite not understanding language he can see, hear, feel, recognise people and objects, understand that he is sick in hospital with a stroke, and he certainly knows that he is conscious. 
After the operation he has the same feelings, but in addition he is pleased to find that he now understands what people say to him, just as he remembers before the stroke. That is, he behaves as if he understands what people say to him and he honestly believes that he understands what people say to him; whereas before the operation he behaves as if he lacks understanding and he knows that he lacks understanding, since when people speak to him it sounds like gibberish. So the post-op Cram is a very strange creature: he can have a normal conversation, appearing to understand everything said to him, honestly believing that he understands everything said to him, while in fact he doesn't understand a word. On the above account, it is difficult to make any sense of the word "understanding". Surely a person who believes he understands language and behaves as if he understands language does in fact understand language. If not, what more could you possibly require of him? You seem to understand me and (though I can't know another person's thoughts for sure) I take your word that you honestly believe you understand me, but this is exactly what would happen if you had been through Cram's operation as well; so it's possible that the ham sandwich you had for lunch yesterday destroyed the NCC in your language centre, and you just haven't noticed. The only other possibility if p-neurons are ~c-neurons is that Cram does in fact realise that he has no more understanding after the surgery than he did before, but can't do anything about it. He attempts to lash out and smash things in frustration but his body won't obey him, and he observes himself making meaningless noises which the treating team apparently understand to be some sort of thank-you speech. I believe that this is what Searle has said would happen, though it is some time since I came across the paper and I can't now find it. It would mean that Cram would be doing his thinking with something other than his brain, which is forced to behave as if everything was fine. So if p-neurons are ~c-neurons this leads to either partial zombies or extra-brain thought. There's no other way around it. Both possibilities are pretty weird, but I would say that the partial zombies offend logic while the extra-brain thought offends science. Do you still claim that the idea of a computer having a mind is more absurd than either of these two absurdities? -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 4 12:05:41 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jan 2010 23:05:41 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <201790.46747.qm@web36504.mail.mud.yahoo.com> References: <201790.46747.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/4 Gordon Swobe : >> I actually believe that semantics can *only* come from >> syntax, but if it can't, your fallback is that semantics >> comes from the physical activity inside brains. > > Something along those lines, yes. But we can't paste form onto substance and expect intrinsic intentionality, and that's all formal programs do to hardware substance. We might just as well write a letter and expect the letter to understand the words. Still, you have agreed that while programming is not sufficient for intelligence, it cannot prevent intelligence. So although you may have a strong hunch that p-neurons aren't c-neurons, you can't claim this with the force of logical necessity. 
And that's what would be required in order to justify the weirdness that the partial brain replacement experiment I have been describing would entail. -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 4 13:13:14 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 00:13:14 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> Message-ID: 2010/1/4 Stefano Vaj : > 2010/1/3 Stathis Papaioannou : >> I think the argument from partial brain replacement that I have put >> forward to Gordon shows that if you can reproduce the behaviour of the >> brain, then you necessarily also reproduce the consciousness. >> Simulating neurons and molecules is just a means to this end. > > "Consciousness" being hard to define as else than a social construct > and a projection (and a pretty vague one, for that matter, inasmuch as > it should be extensible to fruitflies...), the real point of the > exercise is simply to emulate "organic-like" computational abilities > with acceptable performances, brain-like architectures being > demonstrably not too bad at the task. I can't define or even describe the taste of salt, but I know what I have to do in order to generate it, and I can tell you whether an unknown substance tastes salty or not. That's what I want to know about consciousness in general: I can't define or describe it, but I know it when I have it, and I would like to know if I would still have it after undergoing procedures such as brain replacement. > I do not really see anything that suggests that we could not do > everything in software with a PC, a Chinese Room or a cellular > automaton, without emulating *absolutely anything* of the actual > working of brains... There's no more reason why an AI should emulate a brain than there is why a submarine should emulate a fish. However, if you have had a stroke and need the damaged part of your brain replaced, then it would be important to simulate the workings of your brain as closely as possible. It is not clear at present down to what level the simulation needs to be. -- Stathis Papaioannou From gts_2000 at yahoo.com Mon Jan 4 14:01:39 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 06:01:39 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <330161.3464.qm@web36506.mail.mud.yahoo.com> --- On Mon, 1/4/10, Stathis Papaioannou wrote: > It seems to me you must accept some type of epiphenomenalism if you say > that Cram can pass the TT while having different experiences to > Sam. I don't see how that follows, nor do I posit any guess as to their actual experiences (especially for Cram, who may have none by the time the doctors finish with him). I see this experiment in the medical context that you framed. As would happen in a real world hospital setting, the neurosurgeons work on these poor fellows with the alphabet soup neurons until they pass the TT and report normal subjective experiences. Cram's doctors have an extra luxury in that they can program the p-neurons to correct any lingering symptoms. If Sam's m-neurons fail, too bad for Sam. At no time can we really know what goes on in the patients' experiences, except that they report it to their doctors. > This also makes it impossible to ever study the NCC scientifically. 
> This experiment would be the ideal test for it: the p-neurons function > like c-neurons but without the NCC, yet Cram behaves the same as Sam. We needn't create artificial neurons to study the NCC. We need to identify possible target areas and then to test our theories with technology that switches it off and on in a live patient. Most likely it involves a large swath of neurons that need simultaneously to have the correct synaptic activity and (I would guess) electrical coherence or patterns of some kind. (My guess about the electrical activity helps explain why I reject your beer-cans-and-toilet-paper model of the brain.) >> If Sam passes the TT and reports normal subjective > experiences from m-neurons then I will consider him cured. I > have no concerns about "type" of consciousness. > > As you agreed in a later post, only some m-neurons are > c-neurons. It could be that an internal change in a m-neuron could > turn it from a c-neuron to a ~c-neuron. But it seems you are saying there > is no in between state: it is either a c-neuron or a ~c-neuron. I consider them not much different from b-neurons, and just as in b-neurons I would not rule out the possibility of dysfunctional but still operational ones. gotta run, more later... feel free to respond... -gts From stathisp at gmail.com Mon Jan 4 15:10:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 02:10:08 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <330161.3464.qm@web36506.mail.mud.yahoo.com> References: <330161.3464.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/5 Gordon Swobe : > --- On Mon, 1/4/10, Stathis Papaioannou wrote: > >> It seems to me you must accept some type of epiphenomenalism if you say >> that Cram can pass the TT while having different experiences to >> Sam. > > I don't see how that follows, nor do I posit any guess as to their actual experiences (especially for Cram, who may have none by the time the doctors finish with him). You alter Cram's consciousness, but it has no effect on his behaviour. This is the case if you systematically go about replacing neuron after neuron, until his whole brain is gone, and along with it his conscious. Therefore, consciousness has no effect on behaviour, at least in this case. >> This also makes it impossible to ever study the NCC scientifically. >> This experiment would be the ideal test for it: the p-neurons function >> like c-neurons but without the NCC, yet Cram behaves the same as Sam. > > We needn't create artificial neurons to study the NCC. We need to identify possible target areas and then to test our theories with technology that switches it off and on in a live patient. Most likely it involves a large swath of neurons that need simultaneously to have the correct synaptic activity and (I would guess) electrical coherence or patterns of some kind. (My guess about the electrical activity helps explain why I reject your beer-cans-and-toilet-paper model of the brain.) But how would we ever distinguish the NCC from something else that just had an effect on general neural function? If hypoxia causes loss of consciousness, that doesn't mean that the NCC is oxygen. -- Stathis Papaioannou From jonkc at bellsouth.net Mon Jan 4 16:49:44 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 11:49:44 -0500 Subject: [ExI] The symbol grounding problem in strong AI. 
In-Reply-To: <392806.41976.qm@web36507.mail.mud.yahoo.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> Message-ID: On Jan 3, 2010, at 2:42 PM, Gordon Swobe wrote: >> The operative word in the above is "evolved". Why did this mysterious >> "subjective symbol grounding" (bafflegab translation: consciousness) >> evolve? > > To help you communicate better with other monkeys, among other things. So consciousness effects behavior and say goodbye to the Chinese room. > I think you really want to ask how it happened that humans did not evolve as unconscious zombies. Why did evolution select consciousness? I think one good answer is that perhaps nature finds it cheaper when its creatures have first-person awareness of the things they do and say. So it's easier to make a conscious intelligence than an unconscious one. > > We would probably find it more efficient in computers also. So if you ever run across an intelligent computer you can be certain it's conscious. Or at least as certain as you are that your fellow human beings are conscious when they act intelligently. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 4 17:10:55 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 12:10:55 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <822376.84128.qm@web36501.mail.mud.yahoo.com> References: <822376.84128.qm@web36501.mail.mud.yahoo.com> Message-ID: <583AB203-5D73-4932-8930-9A1B93F39D25@bellsouth.net> On Jan 3, 2010, at 4:07 PM, Gordon Swobe wrote: > amoebas appear to have intelligence but most people including me would find themselves hard-pressed to say they have what I mean by consciousness. If you are willing to accept the fantastic premise that an amoeba is intelligent, I don't understand why you wouldn't also accept the far more modest proposition that it is conscious. According to Evolution, consciousness is easy but intelligence is hard; it took far longer to evolve one than the other. The parts of our brain responsible for the most intense emotions like pain, fear, anger and even love are many hundreds of millions of years old, but the parts responsible for higher intelligence of which we are so proud and which make our species unique are only about one million years old, perhaps less, perhaps much less. > I think Darwin and Searle would enjoy each other's company Searle would enjoy talking with Darwin but I doubt the feeling would be reciprocated. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 4 18:18:18 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 12:18:18 -0600 Subject: [ExI] effect/affect again In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> Message-ID: <4B4230EA.3010605@satx.rr.com> On 1/4/2010 10:49 AM, John Clark wrote: > So consciousness effects behavior I know you get some weird pleasure out of butchering the language with this word, John, but I don't think *anyone* would make the universal claim that consciousness effects behavior. The majority of behavior is effected--caused to occur--by reflex, habit, and other non-conscious control systems (driving automatically while thinking of something else, hitting a ball when playing tennis, etc etc). Presumably you meant to write "affects behavior" which is obviously true--consciousness has *some* influence on behavior, but not all.
The problem with playing games with accepted usage is that you can end up saying something stupid that you don't mean. Damien Broderick From jameschoate at austin.rr.com Mon Jan 4 18:33:58 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Mon, 4 Jan 2010 18:33:58 +0000 Subject: [ExI] effect/affect again In-Reply-To: <4B4230EA.3010605@satx.rr.com> Message-ID: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> It's worth mentioning that affect is a verb and effect is (usually) a noun. Affect is about cause, effect is about the thing being affected. ---- Damien Broderick wrote: > On 1/4/2010 10:49 AM, John Clark wrote: > > > So consciousness effects behavior > > I know you get some weird pleasure out of butchering the language with > this word, John, but I don't think *anyone* would make the universal > claim that consciousness effects behavior. The majority of behavior is > effected--caused to occur--by reflex, habit, and other non-conscious > control systems (driving automatically while thinking of something else, > hitting a ball when playing tennis, etc etc). Presumably you meant to > write "affects behavior" which is obviously true--consciousness has > *some* influence on behavior, but not all. > > The problem with playing games with accepted usage is that you can end > up saying something stupid that you don't mean. > -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From thespike at satx.rr.com Mon Jan 4 18:43:12 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 12:43:12 -0600 Subject: [ExI] effect/affect again In-Reply-To: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> Message-ID: <4B4236C0.4040307@satx.rr.com> On 1/4/2010 12:33 PM, jameschoate at austin.rr.com wrote: > It's worth mentioning that affect is a verb and effect is (usually) a noun. > > Affect is about cause, effect is about the thing being affected. No, in the way John was using "effect" it was a verb, just the wrong verb. From pharos at gmail.com Mon Jan 4 18:57:42 2010 From: pharos at gmail.com (BillK) Date: Mon, 4 Jan 2010 18:57:42 +0000 Subject: [ExI] effect/affect again In-Reply-To: <4B4236C0.4040307@satx.rr.com> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> <4B4236C0.4040307@satx.rr.com> Message-ID: On 1/4/10, Damien Broderick wrote: > No, in the way John was using "effect" it was a verb, just the wrong verb. > > In effect, it is just an affectation. 
BillK From sparge at gmail.com Mon Jan 4 19:16:25 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 4 Jan 2010 14:16:25 -0500 Subject: [ExI] effect/affect again In-Reply-To: References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> <4B4236C0.4040307@satx.rr.com> Message-ID: http://theoatmeal.com/comics/misspelling From gts_2000 at yahoo.com Mon Jan 4 20:09:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 12:09:50 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <582583.74076.qm@web36504.mail.mud.yahoo.com> --- On Mon, 1/4/10, Stathis Papaioannou wrote: >> I don't see how that follows, nor do I posit any guess >> as to their actual experiences (especially for Cram, who may >> have none by the time the doctors finish with him). > > You alter Cram's consciousness, but it has no effect on his > behaviour. Yes, the initial operation will almost certainly affect his behavior before he leaves the hospital, causing him to do and say strange things. His surgeon corrects those symptoms and side-effects with more programming/replacements with p-neurons until he can call his patient cured. But unbeknownst to the surgeon, who according to your experimental set up has no understanding of philosophy, the cured patient has no subjective experience. When I say I reject epiphenomenalism, I mean that I reject it as an explanation of normal human consciousness. Because I think consciousness plays a role in human behavior, I think Sam will fail his TT unless his m-neurons give him real consciousness. And unlike Cram's doctors, Sam's have no way to correct any side-effects if the m-neurons don't work as advertised. >> We needn't create artificial neurons to study the NCC. >> We need to identify possible target areas and then to test >> our theories with technology that switches it off and on in >> a live patient. > But how would we ever distinguish the NCC from something > else that just had an effect on general neural function? > If hypoxia causes loss of consciousness, that doesn't mean that > the NCC is oxygen. We know ahead of time that the presence of oxygen will play a critical role. Let us say we think neurons in brain region A play the key role in consciousness. If we do not shut off the supply of oxygen but instead shut off the supply of XYZ to region A, and the patient loses consciousness, we then have reason to say that oxygen, XYZ and the neurons in region A play important roles in consciousness. We then test many similar hypotheses with many similar experiments until we have a complete working hypothesis to explain the NCC. At the end of our research project we should have a reasonable theory that explains why George Foreman fell to the mat and could not get up after Muhammad Ali clobbered him in the 8th round in the Rumble in the Jungle. That happened over 30 years ago, and still nobody knows. -gts From spike66 at att.net Mon Jan 4 21:04:10 2010 From: spike66 at att.net (spike) Date: Mon, 4 Jan 2010 13:04:10 -0800 Subject: [ExI] effect/affect again In-Reply-To: References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02><4B4236C0.4040307@satx.rr.com> Message-ID: <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> > Subject: Re: [ExI] effect/affect again > > On 1/4/10, Damien Broderick wrote: > > No, in the way John was using "effect" it was a verb, just > the wrong verb. > > > > In effect, it is just an affectation. BillK Fortunately, Damien is an affable character, even if at times ineffable.
{8^D spike From jonkc at bellsouth.net Mon Jan 4 21:39:14 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 16:39:14 -0500 Subject: [ExI] effect/affect again In-Reply-To: <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02><4B4236C0.4040307@satx.rr.com> <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> Message-ID: <014A7D44-C323-482C-BF40-3537A46F37BB@bellsouth.net> On Jan 4, 2010, spike wrote: > Fortunately, Damien is an affable character, even if at times ineffable. And redoubtable too when he wasn't being inscrutable. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Jan 4 21:41:53 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 13:41:53 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <523651.30036.qm@web36508.mail.mud.yahoo.com> --- On Mon, 1/4/10, Stathis Papaioannou wrote: > Moreover, you seem to be saying that there is only one type of c-neuron > that could fill the shoes of the original b-neuron, although > presumably there are different m-neurons that could give rise to this > c-neuron. Is that right? 1. I think b-neurons work as c-neurons in the relevant parts of the brain. 2. I think all p-neurons work as ~c-neurons in the relevant parts of the brain. 3. I annoy Searle, but do not I think fully disclaim his philosophy, by hypothesizing that some possible m-neurons work like c-neurons. Does that answer your question? > Suppose the m-neuron (which is a c-neuron) contains a > mechanism to open and close sodium channels depending on the > transmembrane potential difference. Would changing from an analogue > circuit to a digital circuit for just this mechanism change the neuron > from a c-neuron to a ~c-neuron? Philosophically, yes. In practical sense? Probably not in any detectable way. But you've headed down a slippery slope that ends with describing real natural brains as digital computers. I think you want to go there, (and speaking as an extropian I certainly don't blame you for wanting to) and if so then perhaps we should just cut to the chase and go there to see if the idea actually works. >> No, he does not "actually" believe anything. He merely > reports that he feels normal and reports that he > understands. His surgeon programmed all p-neurons such that > he would pass the TT and report healthy intentionality, > including but not limited to p-neurons in Wernicke's area. > > This is why the experiment considers *partial* replacement. > Even before the operation Cram is not a zombie: despite not > understanding language he can see, hear, feel, recognise people and > objects, understand that he is sick in hospital with a stroke, and > he certainly knows that he is conscious. After the operation he has the > same feelings, but in addition he is pleased to find that he > now understands what people say to him, just as he remembers > before the stroke. I think that after the initial operation he becomes a complete basket-case requiring remedial surgery, and that in the end he becomes a philosophical zombie or something very close to one. If his surgeon has experience then he becomes a zombie or near zombie on day one. -gts From jonkc at bellsouth.net Mon Jan 4 21:27:14 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 16:27:14 -0500 Subject: [ExI] effect/affect again. 
In-Reply-To: <4B4230EA.3010605@satx.rr.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> Message-ID: <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> On Jan 4, 2010, Damien Broderick wrote: >> So consciousness effects behavior > > I know you get some weird pleasure out of butchering the language with this word, John, I don't believe I've ever used the word "affect" in my life, I have mentioned it a few times in the past but only when somebody (it may even have been you) accused me of using "effect" too much; but I've always had a fondness for cause and effect and I figure if effect is good enough for cause it's good enough for me. And besides, this entire debate is between those who think the human will is fundamentally different from other kinds of events and those like me who disagree. I think "affectation" still has a place in the English language but "affect" should die and join other extinct words in that great dictionary in the sky, words like "methinks","cozen","fardel", "huggermugger", "zounds" and "typewriter". > but I don't think *anyone* would make the universal claim that consciousness effects behavior. I'm someone and I think consciousness effects behavior, if it didn't we wouldn't have it; at least that's what I think and Darwin agrees with me. As I said before, saying I scratched my nose because I wanted to is a perfectly valid thing to say, as is saying that the balloon expanded because the pressure inside it increased; I do however insist that there is more than one way to correctly describe both of those events. > Presumably you meant to write "affects behavior" You presume incorrectly, I meant to say "effects behavior" and that is exactly what I said. > consciousness has *some* influence on behavior, but not all. If A effects B there is no reason C,D,E and F couldn't effect B too. In fact logically it could be that nothing effects B at all but B changes anyway. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Jan 4 22:07:48 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 4 Jan 2010 14:07:48 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <583AB203-5D73-4932-8930-9A1B93F39D25@bellsouth.net> Message-ID: <579515.50389.qm@web36501.mail.mud.yahoo.com> --- On Mon, 1/4/10, John Clark wrote: > If you are willing to accept the fantastic premise that a amoeba is > intelligent Why do you consider it such a fantastic premise? Amoebas and other such organisms can find food and so on. Sure looks like intelligence to me. Too bad those unconscious critters can't know that I hold them in such high esteem. -gts From pharos at gmail.com Mon Jan 4 22:17:29 2010 From: pharos at gmail.com (BillK) Date: Mon, 4 Jan 2010 22:17:29 +0000 Subject: [ExI] effect/affect again. In-Reply-To: <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: On 1/4/10, John Clark wrote: > You presume incorrectly, I meant to say "effects behavior" and that is > exactly what I said. > > If A effects B there is no reason C,D,E and F couldn't effect B too. In fact > logically it could be that nothing effects B at all but B changes anyway. > > Damien, I don't think your protestations are going to affect John. His behavior remains unaffected by your ineffectual protests. He is effectively immune. 
BillK From thespike at satx.rr.com Mon Jan 4 22:27:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 16:27:57 -0600 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <4B426B6D.4080704@satx.rr.com> On 1/4/2010 4:17 PM, BillK wrote: > I don't think your protestations are going to affect John. > He is effectively immune. Yes, as you suggested previously, it's an affectation. Better than word-blindness, I suppose, but mot wery afficient for conveying ontended meaming. From jonkc at bellsouth.net Mon Jan 4 22:55:32 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 4 Jan 2010 17:55:32 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <579515.50389.qm@web36501.mail.mud.yahoo.com> References: <579515.50389.qm@web36501.mail.mud.yahoo.com> Message-ID: On Jan 4, 2010, Gordon Swobe wrote: > Why do you consider it such a fantastic premise? Amoebas and other such organisms can find food and so on. Sure looks like intelligence to me. Well I admit it does a little bit seem like intelligence to me too, but only a little bit; if a computer had done the exact same thing rather than an amoeba you would be screaming that it has nothing to do with intelligence, it's just programming. But never mind, on a scale of zero to 100,000,000,000 on the intelligence scale with me being 80 and Searle being 49 I'd put amoebas at .000000000000000001 on that same intelligence scale. I'd put that same amoeba at .01 on the consciousness scale. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Mon Jan 4 22:57:25 2010 From: aware at awareresearch.com (Aware) Date: Mon, 4 Jan 2010 14:57:25 -0800 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: On Mon, Jan 4, 2010 at 2:17 PM, BillK wrote: > I don't think your protestations are going to affect John. There once was a man named John Clark, whose bite was as good as his bark. When you said "the effect" he heard "defect" which proceeded to ignite a new spark... which will have little effect on the endless Gordian knot composed of these threads. One bold stroke of insight is all that is required to escape the Gordian knot (Gordon, Guardian, not?) but while the discussion has had some effect on observers' observed affect, it has yet to affect observations on the recursive relationship of the observer to the observed. - Jef From aware at awareresearch.com Tue Jan 5 00:00:05 2010 From: aware at awareresearch.com (Aware) Date: Mon, 4 Jan 2010 16:00:05 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <501890.28524.qm@web113619.mail.gq1.yahoo.com> References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Message-ID: On Sun, Jan 3, 2010 at 7:36 AM, Ben Zaiboc wrote: > I'm not at all convinced by these personality tests. Every time I've tried a Myers-Briggs test (being just as vain as everyone else), I've got a different result. I've taken 4 over 20+ years. Three sponsored by business/management seminars and one as part of a college course that I took with one of our kids. Mine have always indicated INTJ, with most of the scores moving closer to center.
> So far, I'm INTP, INFJ, and INFP, so rather than an xNTx, I seem to be a INxx. Does that mean anything? That you're not good at taking tests? > Also, the summaries remind me more of a horoscope than anything. ?Why do I never read anything bad about myself? ?That's suspicious, I'm not so vain as to think I don't have bad points. Well, quite a lot of statistical effort was applied to MBTI and businesses find utility in them for training people how to recognize differences and find ways to relate to other temperaments. But as Emlyn points out, the Big 5, system has superseded MBTI for academic work. You should note, however, that the descriptions do not say anything "bad" about any of the types, because it's incoherent to say there is anything intrinsically bad about the true nature of anything. That said, they point out plenty of propensities. Compare ISTJ (Gordon, presumably, but with high certainty) with INTJ (Jef), per the Wikipedia descriptions: ISTJ , ------- "ISTJs are faithful, logical, organized, sensible, and earnest traditionalists." "...prefer concrete and useful applications and will tolerate theory only if it leads to these ends." "Material that seems too easy or too enjoyable leads ISTJs to be skeptical of its merit." "...they resist putting energy into things that don't make sense to them..." "They have little use for theory or abstract thinking, unless the practical application is clear." INTJ , ------- "INTJs apply (often ruthlessly) the criterion "Does it work?" to everything from their own research efforts to the prevailing social norms." "...an unusual independence of mind, freeing the INTJ from the constraints of authority, convention, or sentiment for its own sake..." "...known as the "Systems Builders" of the types, perhaps in part because they possess the unusual trait combination of imagination and reliability." "...seek new angles or novel ways of looking at things. They enjoy coming to new understandings...." "They harbor an innate desire to express themselves by conceptualizing their own intellectual designs." Can you see from the above why I might view Gordon (and Lee) as puzzles, while they might see me as an unfathomable irritant? Would members of this list have any trouble deciding between Max and Natasha which is the likely INTJ and which is the likely ENFP? - Jef From spike66 at att.net Tue Jan 5 00:01:21 2010 From: spike66 at att.net (spike) Date: Mon, 4 Jan 2010 16:01:21 -0800 Subject: [ExI] effect/affect again. In-Reply-To: <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> References: <392806.41976.qm@web36507.mail.mud.yahoo.com><4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: On Behalf Of John Clark ...I think "affectation" still has a place in the English language but "affect" should die and join other extinct words in that great dictionary in the sky, words like "methinks","cozen","fardel", "huggermugger", "zounds" and "typewriter"...John K Clark Methinks otherwise John. I like the word methinks, and use it occasionally. I actually agree that the words affect and effect are a flaw in the language. Words that almost rhyme should have very different meanings and usages; those two invite conflation methinks. I also want to keep fardel. I didn't know what that was until I looked it up. Now I shall try to use it. I propose a game of balderdash. 
Perhaps you know of it: the players are given obscure English words, and they make up definitions, and try to fool the other players into choosing their definitions over the others, or the real one. I will not play on fardel, since I looked it up. But here are my other plays: cozen: the group with which one meditates huggermugger: one who attempts to take money by force from environmentalists zounds: the noise often emitted by sleepers typewriter: you stumped me on that one, never heard of it. Actually I must be honest and disqualify myself from this one, for I am one who not only knows what is a typewriter, but actually used one, in college. I can out-geezer almost everyone here by having used the kind (in college!) which does not plug in to the wall. Perhaps only Damien can match this, methinks. spike From rafal.smigrodzki at gmail.com Tue Jan 5 01:55:27 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Mon, 4 Jan 2010 20:55:27 -0500 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> On Mon, Jan 4, 2010 at 7:01 PM, spike wrote: > > Actually I must be honest and disqualify myself from this one, for I am one > who not only knows what is a typewriter, but actually used one, in college. > I can out-geezer almost everyone here by having used the kind (in college!) > which does not plug in to the wall. ?Perhaps only Damien can match this, > methinks. ### I can match you on that one: I learned to type on a manual typewriter, owned by my father by special dispensation of the United Polish Communist Worker Party in the late 70's. Now beat this: I have helped my mother wring laundry using a hand-crank operated mangle attached to the non-automatic washing machine we had in the early 70's. Rafal From thespike at satx.rr.com Tue Jan 5 02:10:44 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jan 2010 20:10:44 -0600 Subject: [ExI] mangle In-Reply-To: <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> Message-ID: <4B429FA4.2000704@satx.rr.com> On 1/4/2010 7:55 PM, Rafal Smigrodzki wrote: > Now beat this: I have helped my mother wring laundry using a > hand-crank operated mangle attached to the non-automatic washing > machine we had in the early 70's. Ha! I helped my mother haul out washing to the line to dry after she'd *boiled it in the copper* (as it was called) long before we had a washing machine. Damien Broderick From max at maxmore.com Tue Jan 5 02:14:26 2010 From: max at maxmore.com (Max More) Date: Mon, 04 Jan 2010 20:14:26 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001050221.o052LEsG011499@andromeda.ziaspace.com> Jef: >Would members of this list have any trouble deciding between Max and >Natasha which is the likely INTJ and which is the likely ENFP? Nope. I always come out solidly INTP. (Unless I remember wrongly; it's been years since I last took the test; but I'm pretty sure that's right.) 
Max From max at maxmore.com Tue Jan 5 02:26:33 2010 From: max at maxmore.com (Max More) Date: Mon, 04 Jan 2010 20:26:33 -0600 Subject: [ExI] mangle Message-ID: <201001050226.o052QiLw025830@andromeda.ziaspace.com> Damien wrote: >Ha! I helped my mother haul out washing to the line to dry after she'd >*boiled it in the copper* (as it was called) long before we had a >washing machine. Washing line? You're bloody lucky (pron. "looky")! In *my* day, we didn't have no washin' lines. We 'ad to hold the wet clothes for several hours, standing on one foot, as we blowed on it manually to help the evaporation. And it we didn't go it right, my mam would chop us into pieces and feed us to me dad for dinner. Forgive me. http://www.youtube.com/watch?v=Xe1a1wHxTyo Max From aware at awareresearch.com Tue Jan 5 02:34:40 2010 From: aware at awareresearch.com (Aware) Date: Mon, 4 Jan 2010 18:34:40 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <201001050221.o052LEsG011499@andromeda.ziaspace.com> References: <201001050221.o052LEsG011499@andromeda.ziaspace.com> Message-ID: On Mon, Jan 4, 2010 at 6:14 PM, Max More wrote: >> Would members of this list have any trouble deciding between Max and >> Natasha which is the likely INTJ and which is the likely ENFP? > > Nope. I always come out solidly INTP. (Unless I remember wrongly; it's been > years since I last took the test; but I'm pretty sure that's right.) I honestly wasn't sure about the P/J dimension in your case. Thanks, - Jef From max at maxmore.com Tue Jan 5 02:10:48 2010 From: max at maxmore.com (Max More) Date: Mon, 04 Jan 2010 20:10:48 -0600 Subject: [ExI] effect/affect again. Message-ID: <201001050237.o052bcnD029935@andromeda.ziaspace.com> >Actually I must be honest and disqualify myself from this one, for I >am one who not only knows what is a typewriter, but actually used >one, in college. I can out-geezer almost everyone here by having >used the kind (in college!) which does not plug in to the >wall. Perhaps only Damien can match this, methinks. > >spike My *dear* fellow. I'll have you know that I wrote my undergraduate thesis using a typewriter. (Something on eudaimonic egoism..., around 1986.) I also created three issues of a quite fancy comics fanzine called Starlight in 1979 and 1980 -- with hand-justified columns and some experimental, slanted column layouts, entirely using a typewriter and hand-spacing to achieve justified columns. (You might reply that the effect was a mere affectation, but it was still an effort incomparable to anything post-computer.) Max From spike66 at att.net Tue Jan 5 05:09:44 2010 From: spike66 at att.net (spike) Date: Mon, 4 Jan 2010 21:09:44 -0800 Subject: [ExI] kepler finds a couple of hot objects Message-ID: <95CE8F580E874D159AA4207317F56F10@spike> This is cool: http://www.foxnews.com/scitech/2010/01/04/planet-hunting-telescope-unearths- hot-mysteries-space/?test=latestnews This comment gave me a good harrr har: How hot? Try 26,000 degrees Fahrenheit (14,425 Celsius). That is hot enough to melt lead or iron. Ummm, yes. That would be plenty hot enough to not only melt but boil everything on the chart, and still have plenty of degrees to spare. {8^D spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lcorbin at rawbw.com Tue Jan 5 05:41:10 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 21:41:10 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> Message-ID: <4B42D0F6.8020003@rawbw.com> Stefano and Stathis, respectively, wrote: > "Consciousness" being hard to define as else than a social construct > and a projection (and a pretty vague one, for that matter, inasmuch as > it should be extensible to fruitflies...), the real point of the > exercise is simply to emulate "organic-like" computational abilities > with acceptable performances, brain-like architectures being > demonstrably not too bad at the task. What the key question is, is whether or not you would choose to be uploaded given a preview of the resulting machinery. It's what all these discussions are really all about. As for me, so long as there is a *causal* mechanism (i.e. information flow from state to state, with time being a key element), and it will produce behavior that is within the range of normal behavior for me, then I'm on board. Stathis: > I can't define or even describe the taste of salt, but I know what I > have to do in order to generate it, and I can tell you whether an > unknown substance tastes salty or not. That's what I want to know > about consciousness in general: I can't define or describe it, but I > know it when I have it, and I would like to know if I would still have > it after undergoing procedures such as brain replacement. Yes, that's it. It is logically conceivable, after all, as several on this list maintain, that every time you replace any biologically operating part with a mechanism that, say, does not involve chemical transformations, then your experience is diminished proportionally, with the end result that any non-biological entity actually has none of this consciousness you refer to. While *logically* possible, of course, I consider this possibility very remote. Lee From lcorbin at rawbw.com Tue Jan 5 05:52:44 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 21:52:44 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> Message-ID: <4B42D3AC.6090504@rawbw.com> Jef wrote at 1/2/2010 12:09 PM: > [Lee wrote] > >> Let's suppose for a moment that [the skeptical view] is right. >> In other words, internal mechanisms of the neuron must also be >> simulated. > > Argh,"turtles all the way down", indeed. Then must nature also > compute the infinite expansion of the digits of pi for every soap > bubble as well? Well, as you know, in one sense nature does compute infinite expansions---but not in a very useful sense. It's annoying that nature exactly solves the Schr?dinger differential equation for the helium atom whereas we cannot. >> ...if presented with two simulations >> only one of which is a true emulation, and they're both >> exhibiting behavior indicating extreme pain, we want to >> focus all relief efforts only on the one. We really do >> *not* care a bit about the other. > > This way too leads to contradiction, for example in the case of a > person tortured, then with memory erased, within a black box. I do not see any contradiction here. I definitely do not want that experience whether or not memories are erased, nor, in my opinion, would it be moral for me to sanction it happening to someone else. 
I consider the addition or deletion of memories per se as not affecting the total benefit over some interval to an entity. Yes, sometimes memory erasure might make certain conditions livable, and certain other memory additions might even produce fond reminiscences. > The morality of any act depends not on the **subjective** state of > another, which by definition one could never know, but on our > assessment of the rightness, in principle, of the action, in terms of > our values. Yes, we're always guessing (though with pretty good guesses, in my opinion) about what others experience. >> For those of us who are functionalists (or, in my case, almost >> 100% functionalists), it seems almost inconceivable that the causal >> components of an entity's having an experience require anything >> beneath the neuron level. In fact, it's very likely that the >> simulation of whole neuron tracks or bundles suffice. > > Let go of the assumption of an **essential** consciousness, and you'll > see that your functionalist perspective is entirely correct, but it > needs only the level of detail, within context, to evoke the > appropriate responses of the observer. To paraphrase John Clark, > "swiftness" is not in the essence of a car, and the closer one looks > the less apt one is to find it. Furthermore (and I realize that John > didn't say /this/), a car displays "swiftness" only within an > appropriate context. But the key to understanding is that this > "swiftness" (separate from formal descriptions of rotational velocity, > power, torque, etc.) is a function of the observer. But this makes it sound, to me, that you're going right back to a "subjective" consideration, namely, this time around, in the mind of an observer. So if A and B are your observers, then whether or not true suffering is occurring to C is a function of A or B? > Happy New Year, Lee. Thanks. Happy New Year to you too! Lee From lcorbin at rawbw.com Tue Jan 5 06:03:01 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 22:03:01 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: References: <501890.28524.qm@web113619.mail.gq1.yahoo.com> Message-ID: <4B42D615.1070306@rawbw.com> Jef writes > INTJ , > > ------- > "INTJs apply (often ruthlessly) the criterion "Does it work?" to > everything from their own research efforts to the prevailing social > norms." > "...an unusual independence of mind, freeing the INTJ from the > constraints of authority, convention, or sentiment for its own > sake..." > "...known as the "Systems Builders" of the types, perhaps in part > because they possess the unusual trait combination of imagination and > reliability." > "...seek new angles or novel ways of looking at things. They enjoy > coming to new understandings...." > "They harbor an innate desire to express themselves by conceptualizing > their own intellectual designs." > > Can you see from the above why I might view Gordon (and Lee) as > puzzles, while they might see me as an unfathomable irritant? Once again, don't confuse objectivity ("to see" or "view") and subjectivity. Objectively, you are absolutely an unfathomable irritant, as you put it, no question about it. > Would members of this list have any trouble deciding between Max and > Natasha which is the likely INTJ and which is the likely ENFP? I don't suppose anyone would :-) Incidentally, many years ago everyone (including me) that I was well-acquainted with was INTP. (But I was almost a J.) Now all my acquaintances are INTJ.
Do we get more judgmental, or decisive, or something as we age, or do you suppose that the same kinds of people 25 years ago that were INTP are now INTJ? Maybe we've put a decade or two more between us and the obscurantism of the sixties. Lee From lcorbin at rawbw.com Tue Jan 5 06:09:13 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 22:09:13 -0800 Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com><4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <4B42D789.2070707@rawbw.com> spike wrote: > typewriter: you stumped me on that one, never heard of it. > > Actually I must be honest and disqualify myself from this one, for I am one > who not only knows what is a typewriter, but actually used one, in college. > I can out-geezer almost everyone here by having used the kind (in college!) > which does not plug in to the wall. Perhaps only Damien can match this, > methinks. I actually owned one of the devices! Paying $200 or so for it, back when that was real money. Perhaps we should, following a hint from Rafal, just use "uffect" in place of those two words people hopelessly confuse (mostly either because they're too lazy or, like John Clark, suffer from congenital stubbornness). But I'm sure that all the traditionalists like Damien would just have a cow at this neologism. Lee From lcorbin at rawbw.com Tue Jan 5 06:15:22 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 04 Jan 2010 22:15:22 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <201001050221.o052LEsG011499@andromeda.ziaspace.com> References: <201001050221.o052LEsG011499@andromeda.ziaspace.com> Message-ID: <4B42D8FA.4010108@rawbw.com> Max More wrote: > I always come out solidly INTP. (Unless I remember wrongly; it's > been years since I last took the test; but I'm pretty sure that's right.) I bet if you take it again, you'll now come out INTJ. That's what happened to me. Lee From rafal.smigrodzki at gmail.com Tue Jan 5 06:41:43 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 5 Jan 2010 01:41:43 -0500 Subject: [ExI] mangle In-Reply-To: <4B429FA4.2000704@satx.rr.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> <4B429FA4.2000704@satx.rr.com> Message-ID: <7641ddc61001042241tb648304mf42fa6778422d651@mail.gmail.com> On Mon, Jan 4, 2010 at 9:10 PM, Damien Broderick wrote: > On 1/4/2010 7:55 PM, Rafal Smigrodzki wrote: > >> Now beat this: I have helped my mother wring laundry using a >> hand-crank operated mangle attached to the non-automatic washing >> machine we had in the early 70's. > > Ha! I helped my mother haul out washing to the line to dry after she'd > *boiled it in the copper* (as it was called) long before we had a washing > machine. ### OK, so match me that: I was scurrying around (being a toddler in the late 60's) as my grammy was stomping the cabbage - mixing sauerkraut in a large tub by stomping it with bare feet, after the old Silesian sauerkraut foot-based seasoning fashion. 
Rafal From stathisp at gmail.com Tue Jan 5 07:15:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 18:15:50 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <4B42D0F6.8020003@rawbw.com> References: <68150.34111.qm@web36508.mail.mud.yahoo.com> <4B3F9D05.40903@rawbw.com> <580930c21001040347m7ef90fb6g570bfbd4029bb4ec@mail.gmail.com> <4B42D0F6.8020003@rawbw.com> Message-ID: 2010/1/5 Lee Corbin : > Yes, that's it. It is logically conceivable, after all, as > several on this list maintain, that every time you replace > any biologically operating part with a mechanism that, say, > does not involve chemical transformations, then your > experience is diminished proportionally, with the end > result that any non-biological entity actually has none > of this consciousness you refer to. While *logically* > possible, of course, I consider this possibility very > remote. If your language centre were zombified, you would be able to participate normally in a conversation and you would honestly believe that you understood everything that was said to you, but in fact you would understand nothing. It's possible that you have a zombified language centre right now, a side-effect of the sandwich you had for lunch yesterday. You wouldn't know it, and even if it were somehow revealed to you, there wouldn't be any good reason to avoid those sandwiches in future. If you think that such a distinction between true experience and zombie experience is incoherent, then arguably it is not even logically possible for artificial neurons to be functionally identical to normal neurons but lack the requirements for consciousness. -- Stathis Papaioannou From max at maxmore.com Tue Jan 5 07:44:11 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 01:44:11 -0600 Subject: [ExI] kepler finds a couple of hot objects Message-ID: <201001050744.o057iI0S021981@andromeda.ziaspace.com> spike posted: > >For now, NASA researcher Jason Rowe, who found the objects, said > he calls them "hot companions." I'm going to bed now, before I get too excited. Thanks for posting that, spike. Interesting. Max From max at maxmore.com Tue Jan 5 07:55:04 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 01:55:04 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001050755.o057tCga000596@andromeda.ziaspace.com> Lee Corbin wrote: >Max More wrote: > > > I always come out solidly INTP. (Unless I remember wrongly; it's > > been years since I last took the test; but I'm pretty sure that's right.) > >I bet if you take it again, you'll now come out INTJ. >That's what happened to me. Well, I was wondering about that, Lee. I wouldn't mind taking the test again to see if my type has shifted. I think I may well be more J-ish, but the test should give a better indication. I can take the test in my copy of David Keirsey's book, "Please Understand Me II: Temperament, Character, Intelligence", but do you know of a better (more thorough and/or updated version) online? Max ------------------------------------- Max More, Ph.D. 
Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From stathisp at gmail.com Tue Jan 5 09:23:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 20:23:44 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <523651.30036.qm@web36508.mail.mud.yahoo.com> References: <523651.30036.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/5 Gordon Swobe : > --- On Mon, 1/4/10, Stathis Papaioannou wrote: > >> Moreover, you seem to be saying that there is only one type of c-neuron >> that could fill the shoes of the original b-neuron, although >> presumably there are different m-neurons that could give rise to this >> c-neuron. Is that right? > > 1. I think b-neurons work as c-neurons in the relevant parts of the brain. > > 2. I think all p-neurons work as ~c-neurons in the relevant parts of the brain. > > 3. I annoy Searle, but do not I think fully disclaim his philosophy, by hypothesizing that some possible m-neurons work like c-neurons. > > Does that answer your question? Is there only one type of c-neuron or is it possible to insert m-neurons which, though they are functionally identical to b-neurons, result in a different kind of consciousness? >> Suppose the m-neuron (which is a c-neuron) contains a >> mechanism to open and close sodium channels depending on the >> transmembrane potential difference. Would changing from an analogue >> circuit to a digital circuit for just this mechanism change the neuron >> from a c-neuron to a ~c-neuron? > > Philosophically, yes. In practical sense? Probably not in any detectable way. But you've headed down a slippery slope that ends with describing real natural brains as digital computers. I think you want to go there, (and speaking as an extropian I certainly don't blame you for wanting to) and if so then perhaps we should just cut to the chase and go there to see if the idea actually works. Philosophy has to give an answer that's in accordance with what would actually happen, what you would actually experience, otherwise it's worse than useless. The discussion we have been having is an example of a philosophical problem with profound practical consequences. If I get a new super-fast computerised brain and you're right I would be killing myself, whereas if I'm right I would become an immortal super-human. I think it's important to be sure of the answer before going ahead! You shouldn't dismiss the slippery slope argument so quickly. Either you suddenly become a zombie when a certain proportion of your neurons internal workings are computerised or you don't. If you don't, then the option is that you don't become zombified at all or that you become zombified in proportion to how much of the neurons is computerised. Either sudden or gradual zombification seems implausible to me. The only plausible alternative is that you don't become zombified at all. >>> No, he does not "actually" believe anything. He merely >> reports that he feels normal and reports that he >> understands. His surgeon programmed all p-neurons such that >> he would pass the TT and report healthy intentionality, >> including but not limited to p-neurons in Wernicke's area. >> >> This is why the experiment considers *partial* replacement. 
>> Even before the operation Cram is not a zombie: despite not >> understanding language he can see, hear, feel, recognise people and >> objects, understand that he is sick in hospital with a stroke, and >> he certainly knows that he is conscious. After the operation he has the >> same feelings, but in addition he is pleased to find that he >> now understands what people say to him, just as he remembers >> before the stroke. > > I think that after the initial operation he becomes a complete basket-case requiring remedial surgery, and that in the end he becomes a philosophical zombie or something very close to one. If his surgeon has experience then he becomes a zombie or near zombie on day one. I don't understand why you say this. Perhaps I haven't explained what I meant well. The p-neurons are drop-in replacements for the b-neurons, just like pulling out the LM741 op amps in a piece of audio equipment and replacing them with TL071's. The TL071 performs the same function as the 741 and has the same pin-out, so the equipment will function just the same, even though the internal circuitry of the two IC's is quite different. You need know nothing at all about the insides of op amps to use them or find replacements for them in a circuit: as long as the I/O behaviour is the same, they one could be driven by vacuum tubes and the other by little demons and the circuit would work just fine in both cases. It's the same with the p-neurons. The manufacturer guarantees that the I/O behaviour of a p-neuron is identical to that of the b-neuron that it replaces, but that's all that is guaranteed: the manufacturer neither knows nor cares about consciousness, understanding or intentionality. Now, isn't it clear from this that Cram must behave normally and must (at least) have normal experiences in the parts of his brain which aren't replaced, given that he wasn't a zombie before the operation? If Cram has neurons in his language centre replaced then he must be able to communicate normally and respond to verbal input normally in every other way: draw a picture, laugh with genuine amusement at a joke, engage in philosophical debate. He must also genuinely believe that he understands everything, since if he didn't he would tell us. So you are put in a position where you have to maintain that Cram behaves as if he has understanding and genuinely believes that he has understanding, while in fact he doesn't understand anything. Is this position coherent? -- Stathis Papaioannou From pharos at gmail.com Tue Jan 5 09:59:01 2010 From: pharos at gmail.com (BillK) Date: Tue, 5 Jan 2010 09:59:01 +0000 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <201001050755.o057tCga000596@andromeda.ziaspace.com> References: <201001050755.o057tCga000596@andromeda.ziaspace.com> Message-ID: On 1/5/10, Max More wrote: > Lee Corbin wrote: > > I bet if you take it again, you'll now come out INTJ. > > That's what happened to me. > > > > I wouldn't mind taking the test again to see if my type has shifted. I > think I may well be more J-ish, but the test should give a better > indication. I can take the test in my copy of David Keirsey's book, "Please > Understand Me II: Temperament, Character, Intelligence", but do you know of > a better (more thorough and/or updated version) online? > > As Ben mentioned, these personality types come out sounding like astrology descriptions. (Don't say anything bad in case you drive the paying customers away). 
The fact that dating and match-making sites use them is another minus point. Perhaps Lee is now INTJ because he's turned into a boring old fart? :) How about these descriptions? INTJ People hate you. I mean, you're pretty damn clever and you know it. You love to flaunt your potential. Heard the word "arrogant" lately? How about "jerk?" Or perhaps they only say that behind your back. That's right. I know I can say this cause you're not going to cry. You're not exactly the most emotional person. You'd rather spend time with your theoretical questions and abstract theories than with other people. Ever been kissed? Ever even been on a date? Trust me, your inflated ego is a complete turnoff with the opposite sex and I am telling you, you're not that great with relationships as it is. You're never going to be a dude or chick magnet, purely because you're more concerned with yourself than others. Meh. They all hate you already anyway. How about this- "stubborn?" Hrm? Heard that lately? All those facts which don't fit your theories must just be wrong, right? I mean, really, the vast amounts of time you spend with your head in the clouds...you're just plain strange. -------------------------------------- INTP Talked to another human being lately? I'm serious. You value knowledge above ALL else. You love new ideas, and become very excited over abstractions and theories. The fact that nobody else cares still hasn't become apparent to you... Nerd's a great word to describe you, and I seriously couldn't care less about the different definitions of the word and why you're actually more of a geek than a nerd. Don't pretend you weren't thinking that. You want every single miniscule fact and theory to be presented correctly. Critical? Sarcastic? Cynical? Pessimistic? Just a few words to describe you when you're at your very best...*cough* Sorry, I mean worst. Picking up the dudes or dudettes isn't something you find easy, but don't worry too much about it. You can blame it on your personality type now. On top of all this, you're shy. Nice one. Now, quickly go and delete everything about "theoretical questions" from your profile page. As long as nobody tries to start a conversation with you, just MAYBE you'll now have a chance of picking up a date. But don't get your hopes up. ----------------------------------- ISTJ One word. Boring. Sums you up to a tee. You're responsible, trustworthy, serious and down to earth. Boring. Boring. Boring. You play by the rules. You follow tradition. You encourage structure. You insist that EVERYBODY do EVERYTHING by the book. Seriously, is there even an ounce of imagination in that little brain of yours? I mean, what's the point of imagination, right? It has no practical value... As far as you're concerned, abstract theories can go screw themselves. You just want the facts, all the facts and nothing but the facts. Oh. And you're a perfectionist. About everything. You know that the previous sentence was gramattically incorrect and that "gramattically" was spelt wrong. Your financial records are correct to 25 decimal places and your bedroom is in pristine condition. In fact, you even don't sleep on your bed anymore for fear that you might crease the sheets. Thankfully, you don't have anyone else to share the bed with, because you're uncomfortable expressing affection and emotion to others. Too bad. ---------------- Do you still like personality tests ??????? The best personality tester is a Labrador dog. 
He loves you and thinks you are wonderful and ignores all your little defects. What more do you want ??? ;) BillK From dharris at livelib.com Tue Jan 5 10:07:08 2010 From: dharris at livelib.com (David C. Harris) Date: Tue, 05 Jan 2010 02:07:08 -0800 Subject: [ExI] effect/affect again. In-Reply-To: <201001050237.o052bcnD029935@andromeda.ziaspace.com> References: <201001050237.o052bcnD029935@andromeda.ziaspace.com> Message-ID: <4B430F4C.5040104@livelib.com> Max More wrote: > >> Actually I must be honest and disqualify myself from this one, for I >> am one who not only knows what is a typewriter, but actually used >> one, in college. I can out-geezer almost everyone here by having used >> the kind (in college!) which does not plug in to the wall. Perhaps >> only Damien can match this, methinks. >> >> spike > > My *dear* fellow. I'll have you know that I wrote my undergraduate > thesis using a typewriter. (Something on eudaimonic egoism..., around > 1986.) I also created three issues of a quite fancy comics fanzine > called Starlight in 1979 and 1980 -- with hand-justified columns and > some experimental, slanted column layouts, entirely using a typewriter > and hand-spacing to achieve justified columns. (You might reply that > the effect was a mere affectation, but it was still an effort > incomparable to anything post-computer.) > > Max Ahhhh, honored Max, geezerdom is not earned by stupendous effort and skill, which you exhibit, but by being OLD! I think I bought my typewriter (elite size character set composed of UGLY san serif letters) around 1963, used it for a few years, and submitted decks of "IBM cards" to a CDC 6600 time shared mainframe around 1965 at UC Berkeley. Now that equipment is making me smile during visits to the Computer History Museum in Mountain View, CA. I claim less talent and more OLD! ;-) If regenerative medicine doesn't save me from permanent death, I hope someone will reanimate me from Alcor's tanks to be a tour guide at the Museum, where I can regale visitors with stories of using a 029 keypunch to make a deck of computer cards with holes punched to allow notching to allow some cards to drop off when a paper clip was inserted. Sounded great, but I didn't have a logical system for more than a nonexclusive OR. When I later encountered Boolean logic I was one motivated student! Oh, and for Spike, a typewriter is a system that takes single character input from a keyboard and immediately outputs it to a printing device, one character at a time, unbuffered, right? - David Harris, Palo Alto, California. From stefano.vaj at gmail.com Tue Jan 5 11:11:58 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 5 Jan 2010 12:11:58 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <85434.91068.qm@web65601.mail.ac4.yahoo.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> Message-ID: <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> 2009/12/30 The Avantguardian : > Well some hints are more obvious than others. ;-) > http://www.hplusmagazine.com/articles/bio/spooky-world-quantum-biology > http://www.ks.uiuc.edu/Research/quantum_biology/ It is not that I do not know the sources, Penrose in the first place. Car engines are also made of molecules, which are made of atoms, and ultimately are the expression of an underlying quantum reality. What I find unpersuasive is the theory that life, however defined, is anything special amongst high-level chemical reactions. 
>> But, there again, quantum computing fully remains in the field of >> computability, does it not? And the existence of "organic computers" >> implementing such principles would be proof that such computers can be >> built. In fact, I would suspect that "quantum computation", in a >> Wolframian sense", would be all around us, also in other, non-organic, >> systems. > > I have no proof but I suspect that many biological processes are indeed quantum computations. Quantum tunneling of information backwards through time could, for example, explain life's remarkable ability to anticipate things. It may very well be the case that quantum computation is in a sense pervasive, but again I do not see why life, however defined, would be a special case in this respect, since I do not see organic brains exhibiting quantum computation features any more than, say, PCs, and I suspect that "biological anticipations", etc., are more in the nature of "optical artifacts" like the Intelligent Design of organisms. >> There again, the theoretical issue would be simply that of executing a >> program emulating what we execute ourselves closely enough to qualify >> as "human-like" for arbitrary purposes, and find ways to implement it >> in manner not making us await its responses for multiples of the >> duration of the Universe... ;-) > > In order to do so, it would have to consider a superposition of?every possible response and collapse?the ouput?"wavefunction" on the most appropriate response. *If* organic brains actually do some quantum computing. Now, I still have to see any human being solving a typical quantum computing problem with a pencil and a piece of paper... ;-) -- Stefano Vaj From gts_2000 at yahoo.com Tue Jan 5 11:26:49 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 03:26:49 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: Message-ID: <647312.20523.qm@web36504.mail.mud.yahoo.com> --- On Mon, 1/4/10, John Clark wrote: > Well I admit it does a little bit seem like > intelligence to me too, but only a little bit, if a computer > had done the exact same thing rather than a amoeba you would > be screaming that it has nothing to do with intelligence > it's just programing. No, I consider computers intelligent. -gts From stathisp at gmail.com Tue Jan 5 11:32:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jan 2010 22:32:31 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <582583.74076.qm@web36504.mail.mud.yahoo.com> References: <582583.74076.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/5 Gordon Swobe : >> But how would we ever distinguish the NCC from something >> else that just had an effect on general neural function? >> If hypoxia causes loss of consciousness, that doesn't mean that >> the NCC is oxygen. > > We know ahead of time that the presence of oxygen will play a critical role. > > Let us say we think neurons in brain region A play the key role in consciousness. If we do not shut off the supply of oxygen but instead shut off the supply of XYZ to region A, and the patient loses consciousness, we then have reason to say that oxygen, XYZ and the neurons in region A play important roles in consciousness. ?We then test many similar hypotheses with many similar experiments until we have a complete working hypothesis to explain the NCC. 
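The screening loop Gordon sketches just above can be written out as toy code, which may make the inference pattern easier to see. This is a minimal sketch only: the factor names and the hidden ground-truth rule are invented stand-ins, not anything claimed in the thread, and it deliberately ignores the confound Stathis raises in the reply that follows.

FACTORS = ["oxygen", "XYZ_supply_to_A", "region_A_neurons", "region_B_neurons"]

def conscious(state):
    # Hidden ground truth for the toy model: the indicator needs oxygen,
    # XYZ delivery to region A, and intact region-A neurons.
    return state["oxygen"] and state["XYZ_supply_to_A"] and state["region_A_neurons"]

def ablation_screen():
    baseline = {factor: True for factor in FACTORS}
    assert conscious(baseline)
    candidates = []
    for factor in FACTORS:
        trial = dict(baseline, **{factor: False})  # shut off just this one factor
        if not conscious(trial):                   # indicator lost, so keep it as a candidate
            candidates.append(factor)
    return candidates

print(ablation_screen())  # prints ['oxygen', 'XYZ_supply_to_A', 'region_A_neurons']

The loop only says which single-factor knockouts abolish the indicator; it cannot, by itself, say which of those factors is "the" correlate rather than mere life support, which is where the argument below picks up.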
But you claim that it is possible to make p-neurons which function like normal neurons but, being computerised, lack the NCC, and putting these neurons into region A as replacements will not cause the patient to fall to the ground unconscious. So if you see in your experiments the patient losing consciousness, or any other behavioural change, that must be due to something computable, and therefore not the NCC. The essential function of the NCC is to prevent the patient from being a zombie, and you can never observe this in an experiment. -- Stathis Papaioannou From mbb386 at main.nc.us Tue Jan 5 12:03:37 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 5 Jan 2010 07:03:37 -0500 (EST) Subject: [ExI] effect/affect again. In-Reply-To: References: <392806.41976.qm@web36507.mail.mud.yahoo.com><4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> Message-ID: <36087.12.77.168.194.1262693017.squirrel@www.main.nc.us> > typewriter: you stumped me on that one, never heard of it. > > Actually I must be honest and disqualify myself from this one, for I am one > who not only knows what is a typewriter, but actually used one, in college. > I can out-geezer almost everyone here by having used the kind (in college!) > which does not plug in to the wall. Perhaps only Damien can match this, > methinks. > Methinks you're still young! I learned on such a thing in highschool. It was a Royal, IIRC. It's one reason I *really* like the older clicky IBM keyboards, they've got the right sound, although one need not press so hard on the keys. The typewriter we had at home was an Underwood and it was black with round head keys that had silver metal rims. Very attractive looking machine. Did you have plain caps to go over the keys to teach you to "touch type" rather than look? Regards, MB From mbb386 at main.nc.us Tue Jan 5 12:08:04 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 5 Jan 2010 07:08:04 -0500 (EST) Subject: [ExI] effect/affect again. In-Reply-To: <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> Message-ID: <36093.12.77.168.194.1262693284.squirrel@www.main.nc.us> Rafal writes: > > Now beat this: I have helped my mother wring laundry using a > hand-crank operated mangle attached to the non-automatic washing > machine we had in the early 70's. > When I was a small child I got the tips of my fingers nipped by one of these wringers. Yeouch! I thought it so cool, I'd turn the handle and watch the rollers - and of course I touched them while I was turning and ...... oh well. ;) We had clothesline strung all across the laundry area and the clothes were hung up to dry when the weather was too bad to take them outdoors. Regards, MB From mbb386 at main.nc.us Tue Jan 5 12:09:47 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 5 Jan 2010 07:09:47 -0500 (EST) Subject: [ExI] mangle In-Reply-To: <4B429FA4.2000704@satx.rr.com> References: <392806.41976.qm@web36507.mail.mud.yahoo.com> <4B4230EA.3010605@satx.rr.com> <325F7A7E-777A-4A06-BB0B-D2E4B59C5476@bellsouth.net> <7641ddc61001041755h370e9f4fwc6779561f98fc795@mail.gmail.com> <4B429FA4.2000704@satx.rr.com> Message-ID: <36095.12.77.168.194.1262693387.squirrel@www.main.nc.us> Damien writes: > Ha! 
I helped my mother haul out washing to the line to dry after she'd > *boiled it in the copper* (as it was called) long before we had a > washing machine. > I remember "the copper" in the laundry cupboard, but it was not used any longer, AFAIK. Regards, MB From gts_2000 at yahoo.com Tue Jan 5 12:10:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 04:10:59 -0800 (PST) Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: Message-ID: <263605.11430.qm@web36507.mail.mud.yahoo.com> --- On Mon, 1/4/10, Aware wrote: > Compare ISTJ (Gordon, presumably, but with high certainty) > with INTJ (Jef), per the Wikipedia descriptions: Actually the last time I took that test, it called me an INTJ too. Most likely you see the influence of analytic philosophy on my intellectual life in recent years. After ignoring the analytics all my life (they looked so darned boring, and who cares about language, logic, meaning and reality anyway?) I finally took the plunge and spent the last year or two reading and surveying thinkers like Frege, Wittgenstein, Russell, Moore and others. And yes Searle also hails from that tradition. The analytics tend to value reality and common sense over lofty and often on close analysis meaningless abstractions. I no longer care to debate how many angels can dance on the head of a pin. You'll have to show me the angels first! :) -gts From jameschoate at austin.rr.com Tue Jan 5 13:27:24 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Tue, 5 Jan 2010 13:27:24 +0000 Subject: [ExI] effect/affect again. In-Reply-To: <4B42D789.2070707@rawbw.com> Message-ID: <20100105132725.9O1SC.183600.root@hrndva-web17-z01> I've been typing since 1968 when I took possession of my mothers K-Mart Lemon Yellow portable. I learned how to actually type after reading an article in the Houston Chronicle Sunday Parade by William F. Buckley Jr. The interviewer asked him what the most important skill he ever learned was and how he learned it. He said typing and he'd. He put a layout of the keyboard on a 3x5 card and taped it to the top of the keyboard. Took him a couple of weeks to learn to touch type. I tried it and it worked. I've suggested it to others and they find it works as well. I still have little Russian keyboards taped to my laptop screen bezel when I have to type Russian and forget where keys are at. It really works well. ---- Lee Corbin wrote: > spike wrote: > > > typewriter: you stumped me on that one, never heard of it. > > > > Actually I must be honest and disqualify myself from this one, for I am one > > who not only knows what is a typewriter, but actually used one, in college. > > I can out-geezer almost everyone here by having used the kind (in college!) > > which does not plug in to the wall. Perhaps only Damien can match this, > > methinks. 
-- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From gts_2000 at yahoo.com Tue Jan 5 13:31:30 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 05:31:30 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <200675.1060.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/5/10, Stathis Papaioannou wrote: > Is there only one type of c-neuron or is it possible to > insert m-neurons which, though they are functionally identical to > b-neurons, result in a different kind of consciousness? I don't know what you mean by "different kind of consciousness". I will say this: if m-neurons cure our man Sam and he then takes LSD, it will affect his conscious experience just as it would for anyone else. > Philosophy has to give an answer that's in accordance with > what would actually happen, what you would actually experience, > otherwise it's worse than useless. The discussion we have been having > is an example of a philosophical problem with profound practical > consequences. If I get a new super-fast computerised brain and you're > right I would be killing myself, whereas if I'm right I would become an > immortal super-human. I think it's important to be sure of the > answer before going ahead! True. On the other hand perhaps you could view it something like Pascal's wager. You have little to lose by believing in digital immortality. If it doesn't work for you then you won't know about it. And you'll never know the truth from talking to the zombies who've tried it. When you ask them, they always say it worked just fine. > You shouldn't dismiss the slippery slope argument so quickly. Either > you suddenly become a zombie when a certain proportion of your neurons > internal workings are computerised or you don't. If you don't, then the > option is that you don't become zombified at all or that you become > zombified in proportion to how much of the neurons is computerised. > Either sudden or gradual zombification seems implausible > to me. Gradual zombification seems plausible to me. In fact we've already discussed this same problem but with a different vocabulary. A week or two ago, I allowed that negligible formal programmification (is that a word?) of real brain processes would result only in negligible loss of intentionality. >> I think that after the initial operation he becomes a >> complete basket-case requiring remedial surgery, and that in >> the end he becomes a philosophical zombie or something very >> close to one. If his surgeon has experience then he becomes >> a zombie or near zombie on day one. > > I don't understand why you say this. Perhaps I haven't > explained what I meant well. The p-neurons are drop-in replacements > for the b-neurons, just like pulling out the LM741 op amps in a > piece of audio equipment and replacing them with TL071's. The TL071 > performs the same function as the 741 and has the same pin-out, so the > equipment will function just the same You've made the same assumption (wrongly imo) as in your last experiment that p-neurons will behave and function exactly like the b-neurons they replaced. They won't except perhaps under epiphenomenalism. 
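The op amp analogy quoted a little further up is the crux of the disagreement, so a tiny code sketch of what "same pin-out, same I/O behaviour" means may be useful here. Everything in it is an illustrative assumption (the Neuron interface, the toy threshold rule, the class names, all invented); it shows only that a caller which sees nothing but inputs and outputs cannot distinguish two implementations whose I/O mappings coincide, whatever their internals.

from abc import ABC, abstractmethod

class Neuron(ABC):
    @abstractmethod
    def fire(self, inputs):
        """Map a tuple of input spikes (0 or 1) to an output spike (0 or 1)."""

class BNeuron(Neuron):
    # Stand-in for the biological original; imagine the threshold arising
    # from messy electrochemistry rather than this one-liner.
    def fire(self, inputs):
        return 1 if sum(inputs) >= 2 else 0

class PNeuron(Neuron):
    # Computerised replacement: different internals, same I/O mapping.
    def fire(self, inputs):
        total = 0
        for spike in inputs:
            total += spike
        return int(total >= 2)

def downstream_circuit(neuron, stimuli):
    # The rest of the "circuit" only ever calls fire(); it has no way to ask
    # which implementation it is wired to.
    return [neuron.fire(s) for s in stimuli]

stimuli = [(0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 1, 0)]
assert downstream_circuit(BNeuron(), stimuli) == downstream_circuit(PNeuron(), stimuli)

Whether anything beyond this I/O equivalence matters, and whether such equivalence can even be achieved by a program in the first place, is exactly what Gordon and Stathis are disputing in the surrounding posts; the sketch takes no side on that.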
If you accept epiphenomenalism and reject the common and in my opinion much more sensible view that experience affects behavior, including neuronal behavior, then we need to discuss that philosophical problem before we can go forward. It looks to me that serious complications will arise for the first surgeon who attempts this surgery with p-neurons. By the way this conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he might otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction. Fortunately for Cram (or at least for his body) the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons wherever necessary in his brain until his patient appears ready for life on the streets, zombifying him in the process. > Now, isn't it clear from this that Cram must behave > normally and must (at least) have normal experiences in the parts of > his brain which aren't replaced No, see above. > If Cram has neurons in his language centre replaced then he > must be able to communicate normally and respond to verbal input > normally in every other way: draw a picture, laugh with genuine > amusement at a joke, engage in philosophical debate. He must also > genuinely believe that he understands everything, since if he didn't > he would tell us. No he would not tell us! The surgeon programmed Cram to behave normally and to lie about his subjective experience, all the while believing naively that his efforts counted as cures for the symptoms and side-effects his patient reported. Philosophical zombies have no experience. They know nothing whatsoever, but they lie about it. -gts From gts_2000 at yahoo.com Tue Jan 5 13:54:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 5 Jan 2010 05:54:36 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <670704.9608.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/5/10, Stathis Papaioannou wrote: > But you claim that it is possible to make p-neurons which > function like normal neurons but, being computerised, lack the NCC, > and putting these neurons into region A as replacements will not cause > the patient to fall to the ground unconscious. No, I make no such claim. Cram's surgeon will no doubt find a way to keep the man walking, even if semantically brain-dead from the effective lobotomization of his Wernicke's and related. -gts From stathisp at gmail.com Tue Jan 5 14:39:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jan 2010 01:39:27 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <670704.9608.qm@web36501.mail.mud.yahoo.com> References: <670704.9608.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/6 Gordon Swobe : > --- On Tue, 1/5/10, Stathis Papaioannou wrote: > >> But you claim that it is possible to make p-neurons which >> function like normal neurons but, being computerised, lack the NCC, >> and putting these neurons into region A as replacements will not cause >> the patient to fall to the ground unconscious. > > No, I make no such claim. Cram's surgeon will no doubt find a way to keep the man walking, even if semantically brain-dead from the effective lobotomization of his Wernicke's and related. Well, Searle makes this claim. 
He says explicitly that the behaviour of a brain can be simulated by a computer, and invokes Church's thesis in support of this. However, he claims the simulated brain won't have consciousness, and will result in a philosophical zombie. Perhaps there is some confusion because Searle is talking about simulating a whole brain, not a neuron, but if you can make a zombie brain it should certainly be possible to make a zombie neuron. That's what a p-neuron is: it acts just like a b-neuron, the b-neurons around it think it's a b-neuron, but because it's computerised, you claim, it lacks the essentials for consciousness. By definition, if the p-neurons function as advertised they can be swapped for the equivalent b-neuron and the person will behave exactly the same and honestly believe that nothing has changed. If you *don't* believe p-neurons like this are possible then you disagree with Searle. Instead, you believe that there is some aspect of brain physics that is uncomputable, and therefore that weak AI and philosophical zombies may not be possible. This is a logically consistent position, while Searle's is not. However, there is no scientific evidence that the brain uses uncomputable physics. -- Stathis Papaioannou From thespike at satx.rr.com Tue Jan 5 16:06:06 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 10:06:06 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: References: <201001050755.o057tCga000596@andromeda.ziaspace.com> Message-ID: <4B43636E.7020206@satx.rr.com> On 1/5/2010 3:59 AM, BillK wrote: > How about these descriptions? Hey, I'm *all* of them! How can that be? Give me an abstract theory about it, quick! Damien Broderick From jonkc at bellsouth.net Tue Jan 5 16:47:00 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 5 Jan 2010 11:47:00 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <263605.11430.qm@web36507.mail.mud.yahoo.com> References: <263605.11430.qm@web36507.mail.mud.yahoo.com> Message-ID: <9FFFB279-C2B6-448A-BAFC-CA7FDBDED411@bellsouth.net> On Jan 5, 2010 Gordon Swobe wrote: > Gradual zombification seems plausible to me. Yes I know it does, however zombification would not seem plausible to you if you understood Darwin's Theory of Evolution. > I finally took the plunge and spent the last year or two reading and surveying thinkers like Frege, Wittgenstein, Russell, Moore and others. And yes Searle also hails from that tradition. The two greatest philosophical discoveries of the 20'th century were Quantum Mechanics and Godel's Incompleteness Theorem, philosophers did not discover either of them. In fact Wittgenstein probably didn't even read Godel's 1931 paper until 1942 and when he did comment on it, in a article published after his death, he said Godel's paper was just a bunch of tricks of a logical conjurer. He seemed to think that prose could disprove a mathematical proof; even many of Wittgenstein's fans are embarrassed by his last a article. And by the way, the greatest philosophical discovery of the 19'th century was Darwin's Theory of Evolution and that also did not involve philosophers. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thespike at satx.rr.com Tue Jan 5 17:49:36 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 11:49:36 -0600 Subject: [ExI] quantum brains In-Reply-To: <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> Message-ID: <4B437BB0.2020306@satx.rr.com> On 1/5/2010 5:11 AM, Stefano Vaj wrote: >> > In order to do so, it would have to consider a superposition of every possible response and collapse the ouput "wavefunction" on the most appropriate response. > > *If* organic brains actually do some quantum computing. Now, I still > have to see any human being solving a typical quantum computing > problem with a pencil and a piece of paper... ;-) I suppose it's possible that some autistic lightning calculators do that. But I've read arxiv papers recently arguing that photosynthesis functions via entanglement, so something that basic might be operating in other bio systems. And of course since I'm persuaded that some psi phenomena are real, *something* weird as shit is needed to account for them, something that can either do stupendous simulations in multiple worlds/superposed states, or can modify its state according to outcomes in the future. If that's not QM, it's something equally hair-raising that electronic computers aren't built to do. Damien Broderick From max at maxmore.com Tue Jan 5 17:54:01 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 11:54:01 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> BillK wrote: >As Ben mentioned, these personality types come >out sounding like astrology descriptions. (Don't >say anything bad in case you drive the paying >customers away). The fact that dating and >match-making sites use them is another minus point. Many businesses use them also, especially the MBTI. I find this personality typing system interesting and intuitively plausible, but regard it with a low degree of confidence. The comparison to astrological descriptions is not unreasonable, though the MBTI tying seem more specific and predictive. I've read and reviewed a number of books and papers on the topic, but have not yet come across a good test of the MBTI. Here are some of my relevant reviews: Astrology and Alchemy ? The Occult Roots of the MBTI European Business Forum published on 03/01/2004 http://www.manyworlds.com/exploreco.aspx?coid=CO5240417484857 Personality Plus The New Yorker published on 09/20/2004 http://www.manyworlds.com/exploreco.aspx?coid=CO112404141215 The Cult of Personality http://www.manyworlds.com/exploreco.aspx?coid=CO5270512997 Please Understand Me II: Temperament, Character, Intelligence by David Keirsey http://www.manyworlds.com/exploreco.aspx?coid=CO814013273511 Personality Tests: Back With a Vengeance [] by Alison Overholt http://www.manyworlds.com/exploreco.aspx?coid=CO11150412584917 ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From thespike at satx.rr.com Tue Jan 5 18:20:37 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 12:20:37 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... 
In-Reply-To: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> References: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> Message-ID: <4B4382F5.1010207@satx.rr.com> On 1/5/2010 11:54 AM, Max More wrote: > The comparison to astrological descriptions is not unreasonable, though > the MBTI tying seem more specific and predictive. That's not the salient distinction, though, Max. It's that MBTI captures your own assessment of *how you are and how you function*--while astrology bogusly claims to extrapolate all those data from your sun sign and planetary configurations at birth (not even at conception). It's barely possible that babies of a certain genetically-controlled, or developmentally-shaped, constitution will be born during a certain season, or somehow be sensitive to some cosmic condition that triggers hormones that provoke labor at a certain time of night, or plus or minus N days, but that's madly speculative and far too general in any case. The fact that MBTI gets almost everything right about me and my wife is obviously due to the fact that it does capture what we regard as the crucial elements of our attitudes, drives, behavior, and feeds that back to us in a neat summary, together with purportedly empirical information on how people of our type will get on with other kinds of humans. Astrological systems could probably do that too, if you were allowed to browse through the descriptors and choose what "sign" you are, with the actual constellations etc entirely irrelevant (as they almost certainly are, except for the seasonal aspect mentioned above). Damien Broderick From thespike at satx.rr.com Tue Jan 5 18:29:28 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 12:29:28 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <4B4382F5.1010207@satx.rr.com> References: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> <4B4382F5.1010207@satx.rr.com> Message-ID: <4B438508.2000706@satx.rr.com> On 1/5/2010 12:20 PM, I wrote: > Astrological systems could probably do that too, if you were allowed to > browse through the descriptors and choose what "sign" you are, with the > actual constellations etc entirely irrelevant (as they almost certainly > are, except for the seasonal aspect mentioned above). Hmm, so what sun sign is closest to INTJ? I want to adopt it. I'll gladly change my birthday. Most posters here could take the same day, I imagine. What a party! Damien Broderick From max at maxmore.com Tue Jan 5 18:35:51 2010 From: max at maxmore.com (Max More) Date: Tue, 05 Jan 2010 12:35:51 -0600 Subject: [ExI] MBTI, and what a difference a letter makes... Message-ID: <201001051836.o05Ia0YN021706@andromeda.ziaspace.com> Apart from the summary of studies on the Wikipedia page, the following source provides an interesting critique: http://www.bmj.com/cgi/eletters/328/7450/1244#60169 Max From pharos at gmail.com Tue Jan 5 19:43:40 2010 From: pharos at gmail.com (BillK) Date: Tue, 5 Jan 2010 19:43:40 +0000 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <4B43636E.7020206@satx.rr.com> References: <201001050755.o057tCga000596@andromeda.ziaspace.com> <4B43636E.7020206@satx.rr.com> Message-ID: On 1/5/10, Damien Broderick wrote: > Hey, I'm *all* of them! How can that be? Give me an abstract theory about > it, quick! > > I've just returned to my keyboard and reread my email and I feel that I might have to apologize to all our INTJ readers. Really, you're all lovely and the salt of the earth. 
We couldn't do without you and think you are the greatest thing since sliced bread. The alternative descriptions were just a bit of fun intended to point out possible flaws in the personality analysis system. Damien, the reason you think you're *all* of them might be because you are a *really* strange personality, :) but more likely it is because of the generalized way they are written. Most people have a very wide range of characteristics and behaviors and can see themselves in all of them, some of the time. Even homicidal dictators might like dogs and do painting or design hobbies. It is very difficult to concentrate on being a homicidal maniac *all* of the time. (Or so I find). BillK From stathisp at gmail.com Tue Jan 5 21:25:32 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jan 2010 08:25:32 +1100 Subject: [ExI] quantum brains In-Reply-To: <4B437BB0.2020306@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> Message-ID: <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> On 06/01/2010, at 4:49 AM, Damien Broderick wrote: >> > And of course since I'm persuaded that some psi phenomena are real, > *something* weird as shit is needed to account for them, something > that can either do stupendous simulations in multiple worlds/ > superposed states, or can modify its state according to outcomes in > the future. If that's not QM, it's something equally hair-raising > that electronic computers aren't built to do. That would make mind uploading impossible. It might still be possible to replicate a mind, but it wouln't have all the advantages of software. -- Stathis Papaioannou From jonkc at bellsouth.net Tue Jan 5 21:06:05 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 5 Jan 2010 16:06:05 -0500 Subject: [ExI] quantum brains In-Reply-To: <4B437BB0.2020306@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> Message-ID: On Jan 5, 2010, at 12:49 PM, Damien Broderick wrote: > On 1/5/2010 5:11 AM, Stefano Vaj wrote: >> >> *If* organic brains actually do some quantum computing. Now, I still >> have to see any human being solving a typical quantum computing >> problem with a pencil and a piece of paper... ;-) > > I suppose it's possible that some autistic lightning calculators do that. But even the grandest of these lightning calculators are no match for a conventional non-quantum computer. I just calculated the first 707 digits of PI on my iMAC, it took exactly .000426 seconds, and my iMAC isn't even top of the line anymore, although it was for about 15 minutes. In 1873 the mathematician William Shanks died and had spent the last 20 years of his life doing the exact same calculation. No that isn't correct, it isn't the exact same calculation; my iMac figured the correct numbers but poor Mr. Shanks made an error at digit 527, rendering all further digits and the final 5 years of his life worthless. Fortunately he never learned of his error, nobody did till 1958 when a computer spotted it. it takes my machine .000021 seconds to calculate 527 digits of PI. > I'm persuaded that some psi phenomena are real I know this will greatly surprise you but I don't entirely agree with you about that. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
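John's pi timings above are easy to reproduce today. A hedged sketch follows, assuming Python and the mpmath arbitrary-precision library (a choice of convenience, not anything named in the post); absolute times will differ by machine, and the only point is the one John makes, that hundreds of digits now cost somewhere between microseconds and milliseconds rather than decades of hand calculation.

import time
from mpmath import mp

for digits in (527, 707):
    mp.dps = digits                     # working precision in decimal places
    start = time.perf_counter()
    pi_str = mp.nstr(mp.pi, digits)     # evaluate and format pi at that precision
    elapsed = time.perf_counter() - start
    # Timings vary by machine; this is only an illustrative re-run.
    print(f"{digits} digits in {elapsed:.6f} s, starting {pi_str[:12]}...")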
URL: From thespike at satx.rr.com Tue Jan 5 22:10:00 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jan 2010 16:10:00 -0600 Subject: [ExI] quantum brains In-Reply-To: <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> Message-ID: <4B43B8B8.3030202@satx.rr.com> On 1/5/2010 3:25 PM, Stathis Papaioannou wrote: > That would make mind uploading impossible. It might still be possible to > replicate a mind, but it wouln't have all the advantages of software. Yes, it's a disheartening thought. Unless minds are already being copied on a time-sharing entanglement basis through whatever medium psi operates in--which opens the way to a sort of version of (QT-instantiated) souls, maybe, causing John Clark to give up on me finally as a hopeless lost cause. Norman Spinrad wrote a short novel about uploading, DEUS X (1993), in which to my amazement he took the line that it's impossible and bad for your health and just creates zombie cartoon replicas. That's sf writers for ya--we'll consider *anything*... Damien Broderick From gts_2000 at yahoo.com Wed Jan 6 12:59:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 6 Jan 2010 04:59:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <499450.65413.qm@web36505.mail.mud.yahoo.com> --- On Tue, 1/5/10, Stathis Papaioannou wrote: >> No, I make no such claim. Cram's surgeon will no doubt >> find a way to keep the man walking, even if semantically >> brain-dead from the effective lobotomization of his >> Wernicke's and related. > > Well, Searle makes this claim. I don't think Searle ever considered a thought experiment exactly like the one we created here. In any case, in this experiment, I simply deny your claim that my position entails that the surgeon cannot keep the man walking. The surgeon starts with a patient with a semantic deficit caused by a brain lesion in Wernicke's area. He replaces those damaged b-neurons with p-neurons believing just as you do that they will behave and function in every respect exactly as would have the healthy b-neurons that once existed there. However on my account of p-neurons, they do not resolve the patient's symptoms and so the surgeon goes back in to attempt more cures, only creating more semantic issues for the patient. The surgeon keeps patching the software so to speak until finally the patient does speak and behave normally, not realizing that each patch only further compromised his patient's intentionality. In the end he succeeds in creating a patient who reports normal experiences and passes the Turing test, oblivious to the fact that the patient also now has little or no experience of understanding words assuming he has any experience at all. -gts > Perhaps > there is some confusion because Searle is talking about > simulating a > whole brain, not a neuron, but if you can make a zombie > brain it > should certainly be possible to make a zombie neuron. > That's what a > p-neuron is: it acts just like a b-neuron, the b-neurons > around it > think it's a b-neuron, but because it's computerised, you > claim, it > lacks the essentials for consciousness. By definition, if > the > p-neurons function as advertised they can be swapped for > the > equivalent b-neuron and the person will behave exactly the > same and > honestly believe that nothing has changed. 
> > If you *don't* believe p-neurons like this are possible > then you > disagree with Searle. Instead, you believe that there is > some aspect > of brain physics that is uncomputable, and therefore that > weak AI and > philosophical zombies may not be possible. This is a > logically > consistent position, while Searle's is not. However, there > is no > scientific evidence that the brain uses uncomputable > physics. > > > -- > Stathis Papaioannou > From gts_2000 at yahoo.com Wed Jan 6 14:28:18 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 6 Jan 2010 06:28:18 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: <4B42D3AC.6090504@rawbw.com> Message-ID: <191400.54764.qm@web36501.mail.mud.yahoo.com> Jef, > Argh,"turtles all the way down", indeed.?Then must nature also compute > the infinite expansion of the digits of pi for every soap bubble as well? Your question assumes that nature actually performs soap bubble computations somewhere as if on some Divine Universal Turing Machine. I don't think we have any good reason to believe so. -gts From stathisp at gmail.com Wed Jan 6 14:32:05 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 7 Jan 2010 01:32:05 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <499450.65413.qm@web36505.mail.mud.yahoo.com> References: <499450.65413.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/6 Gordon Swobe : > --- On Tue, 1/5/10, Stathis Papaioannou wrote: > >>> No, I make no such claim. Cram's surgeon will no doubt >>> find a way to keep the man walking, even if semantically >>> brain-dead from the effective lobotomization of his >>> Wernicke's and related. >> >> Well, Searle makes this claim. > > I don't think Searle ever considered a thought experiment exactly like the one we created here. He did, and I finally found the reference. It was in his 1992 book, "The Rediscovery of the Mind", pp 66-67. Here is a quote: <...as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, "We are holding up a red object in front of you; please tell us what you see." You want to cry out, "I can't see anything. I'm going totally blind." But you hear your voice saying in a way that is completely out of your control, "I see a red object in front of me." If we carry the thought-experiment out to the limit, we get a much more depressing result than last time. We imagine that your conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same.> He is discussing here the replacement of neurons in the visual cortex with functionally identical computer chips. He agrees that it is possible to make functionally identical computerised neurons because he accepts that physics is computable. He agrees that these p-neurons will interact normally with the remaining b-neurons because they are, by definition, functionally identical. He agrees that the behaviour of the whole brain will continue as per normal because this also follows necessarily if the p-neurons and remaining b-neurons behave normally. 
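Since so much of the thread turns on what "functionally identical, swapped in one at a time" commits one to, here is a toy rendering of the partial-replacement experiment in code. The network, the majority-vote update rule and the stimuli are invented for illustration; the sketch only demonstrates the logical point Stathis is pressing, that if the per-neuron I/O mapping really is preserved, whole-system behaviour is forced to be identical at every replacement fraction. Whether that premise can be granted at all (Gordon's doubt), or granted while experience still drains away (Searle's claim in the passage quoted above), is the substantive question the code cannot touch.

import random

random.seed(1)
N = 12
wiring = [random.sample(range(N), 3) for _ in range(N)]   # each neuron reads 3 others

def b_rule(bits):
    # "Biological" neuron: fires on a majority of its three inputs.
    return int(sum(bits) >= 2)

def p_rule(bits):
    # "Programmatic" replacement: different internals, same I/O mapping.
    return 1 if bits.count(1) > bits.count(0) else 0

def run(rules, state, steps=5):
    history = [tuple(state)]
    for _ in range(steps):
        state = [rules[i]([state[j] for j in wiring[i]]) for i in range(N)]
        history.append(tuple(state))
    return history

initial = [random.randint(0, 1) for _ in range(N)]
baseline = run([b_rule] * N, initial)

for replaced in range(N + 1):                      # 0, 1, ..., all 12 neurons swapped
    rules = [p_rule] * replaced + [b_rule] * (N - replaced)
    assert run(rules, initial) == baseline         # behaviour identical at every stage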
However, he believes that consciousness will become decoupled from behaviour: the patient will become blind, will realise he is blind and try to cry out, but he will hear himself saying that everything is normal and will be powerless to do anything about it. That would only be possible if the patient is doing his thinking with something other than his brain, although it doesn't seem that Searle realised this, since he has always claimed that thinking is done with the brain and there is no immaterial soul. > In any case, in this experiment, I simply deny your claim that my position entails that the surgeon cannot keep the man walking. > > The surgeon starts with a patient with a semantic deficit caused by a brain lesion in Wernicke's area. He replaces those damaged b-neurons with p-neurons believing just as you do that they will behave and function in every respect exactly as would have the healthy b-neurons that once existed there. However on my account of p-neurons, they do not resolve the patient's symptoms and so the surgeon goes back in to attempt more cures, only creating more semantic issues for the patient. Can you explain why you think the p-neurons won't be functionally identical? It seems that you do believe (unlike Searle) that there is something about neuronal behaviour that is not computable, otherwise there would be nothing preventing the creation of p-neurons that are drop-in replacements for b-neurons, guaranteed to leave behaviour unchanged. As I have said before, this is a logically consistent position; it would mean p-neurons, weak AI, the Chinese Room and philosophical zombies might all be impossible. It is a scientific rather than a philosophical question whether the brain utilises uncomputable physics, and the standard scientific position is that it doesn't. -- Stathis Papaioannou From aware at awareresearch.com Wed Jan 6 16:11:42 2010 From: aware at awareresearch.com (Aware) Date: Wed, 6 Jan 2010 08:11:42 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: <191400.54764.qm@web36501.mail.mud.yahoo.com> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Wed, Jan 6, 2010 at 6:28 AM, Gordon Swobe wrote: > Jef, > >> Argh,"turtles all the way down", indeed.?Then must nature also compute >> the infinite expansion of the digits of pi for every soap bubble as well? > > Your question assumes that nature actually performs soap bubble computations somewhere as if on some Divine Universal Turing Machine. I don't think we have any good reason to believe so. I don't make that assumption. I was offering it as a reductio ad absurdum applicable (I think) to your insistence that "consciousness" is an intrinsic property of some brains, but that it is absent at any particular level of description. I have to admit I've lost track of all the phases of your argument since you and Stathis have gone around and around so many times, and the whole thing tends to evaporate in my mind since the problem, as formulated, can't be modeled (It can't be coherently stated.) As I've said already (three times in this thread) it seems that everyone here (and Searle) would agree with the functionalist position: that perfect copies must be identical, and thus functionalism needs no defense. Stathis continues to argue on the basis of functional identity, since he doesn't seem to see how there could be anything more to the question. [I know Stathis had a copy of Hofstadter's _I AM A STRANGE LOOP_, but I suspect he didn't finish it.] 
John Clark continues to argue on the more abstract basis that evolutionary processes don't produce intelligence without consciousness, which, in my opinion is flawed, since one can point to examples of evolved "intelligence"--organisms acting with appropriate prediction and control--yet lacking that extra evolutionary layer providing awareness and thus modeling of "self", but when pinned down John appears to go to either limit: Mr. Jupiter Brain wouldn't be very smart if he didn't model himself, or the other (panpsychist) view that even an amoeba has consciousness, but just an eensy teensy bit. And you continue to argue along the lines of Searle that since we KNOW (from indisputable 1st-person evidence) that conscious experience (including qualia, meaning, intentionality) EXIST, and since we are hard-core functionalists and materialists and can see upon close inspection (however close we might care to look) that there is no place within the formally described system in which such qualia/meaning/intentionality are produced, then there MUST be some extra ingredient, essential to consciousness, of which we are yet unaware. And I've already offered that, despite the seductively strong intuition, reinforced by our nature, language and culture, that these phenomena of qualia/meaning/intensionality are real, undeniable, intrinsic properties of at least certain organisms including most human beings, that there is actually no need for any mysterious extra ingredient. The "mysterious" phenomena are adequately and parsimoniously explained in terms of the (recursive) relationship of the observer to the observed. Of course "we" refer to "ourselves" in this way. So in a sense, the panpsychists got it pretty close, except inside-out and with the assumption of an ontological "consciousness" that isn't necessary. Actually NOTHING has this assumed essential conscious, but EVERYTHING expresses self-awareness, and will necessarily report 1st-person experience, to the extent that its functional nature implements a reflective model of itself. What more is there to say? - Jef From jonkc at bellsouth.net Wed Jan 6 17:32:24 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 6 Jan 2010 12:32:24 -0500 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> On Jan 6, 2010, Aware wrote: > John Clark continues to argue on the more abstract basis that > evolutionary processes don't produce intelligence without > consciousness, which, in my opinion is flawed, since one can point to > examples of evolved "intelligence"--organisms acting with appropriate > prediction and control--yet lacking that extra evolutionary layer > providing awareness and thus modeling of "self" The trouble with all these discussions is that people point to things and say, look at that (computer made of beer cans, Chinese room, ameba, or whatever) and say that's intelligent but *OBVIOUSLY* it's not conscious; but it is not obvious at all and in fact they have absolutely no way of knowing it is true. If you show me something and call it "intelligent" then I can immediately call it conscious and don't even need to express reservations on the use of the word with quotation marks as you did because we learned from the history of Evolution that consciousness is easy but intelligence is hard. > when pinned down John appears to go to either limit: Mr. 
Jupiter Brain wouldn't be > very smart if he didn't model himself Yes. > or the other (panpsychist) view that even an amoeba has consciousness, but just an eensy teensy bit. If an amoeba is a eensy bit intelligent then it's two eensy bits conscious. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From x at extropica.org Wed Jan 6 18:20:30 2010 From: x at extropica.org (x at extropica.org) Date: Wed, 6 Jan 2010 10:20:30 -0800 Subject: [ExI] Some new angle about AI. In-Reply-To: <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: 2010/1/6 John Clark : > On Jan 6, 2010, Aware wrote: > The trouble with all these discussions is that people point to things and > say, look at that (computer made of beer cans, Chinese room, ameba, or > whatever) and say that's intelligent but *OBVIOUSLY* ?it's not conscious; > but it is not obvious at all and in fact they have absolutely no way of > knowing it is true. I agree that consciousness (self-awareness) is not obvious, and can only be inferred. By definition. It seems to me that you're routinely conflating "intelligence" and "consciousness", but then, oddly, you distinguish between them by saying one is much easier than the other. I AGREE that in terms of the evolutionary process that lead to the emergence of intelligence and then consciousness (self-awareness) on this planet, that the evolution of "intelligence" was a much bigger step, requiring a lot more time, than the evolution of consciousness, which is like just an additional layer of supervision. > If you show me something and call it "intelligent" then > I can immediately call it conscious and don't even need to express > reservations on the use of the word with quotation marks as you did because > we learned from the history of Evolution that consciousness is easy but > intelligence is hard. So why don't you agree with me that intelligence must have "existed" (been recognizable, if there had been an observer) for quite a long time before evolutionary processes stumbled upon the additional, supervisory, hack of self-awareness? >> when pinned down?John appears to go to either limit: ?Mr. Jupiter Brain >> wouldn't be very smart if he didn't model himself > > Yes. > >> or the other (panpsychist) view?that even an amoeba has consciousness, but >> just an eensy teensy bit. > > If an amoeba is a eensy bit intelligent then it's two eensy bits conscious. > ?John K Clark It doesn't (seem, to me) to follow at all that if an amoeba can be said to be intelligent (displays behaviors of effective prediction and control appropriate to its environment of adaptation) that it can necessarily be said to be conscious (exploits awareness of its own states and actions.) That seems to me to be an additional layer of supervisory functionality that isn't implemented in the relatively simple structure of the amoeba. You're asserting a continuous QUANTITATIVE scale of consciousness, from the amoeba (and presumably below) up to Mr. Jupiter Brain (and presumably beyond.) I'm asserting ongoing, punctuated, QUALITATIVE developments, with novel hacks like self-awareness discovered at some point, exploited for the additional fitness they confer, and eventually superseded by even newer hacks providing greater benefits over greater scope of interaction. 
I fully expect that self-awareness will eventually be superseded by a fractal form of hierarchical awareness. - Jef From scerir at libero.it Wed Jan 6 18:20:41 2010 From: scerir at libero.it (scerir) Date: Wed, 6 Jan 2010 19:20:41 +0100 Subject: [ExI] quantum brains In-Reply-To: <4B43B8B8.3030202@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com><1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> <4B43B8B8.3030202@satx.rr.com> Message-ID: <69DBD12AC7674C60A97A6F9152851B90@PCserafino> Damien: And of course since I'm persuaded that some psi phenomena are real, *something* weird as shit is needed to account for them, something that can either do stupendous simulations in multiple worlds/superposed states, or can modify its state according to outcomes in the future. If that's not QM, it's something equally hair-raising that electronic computers aren't built to do. # But what is the quantum? J.Wheeler said there was just a "Merlin principle" (named after the legendary magician who, when pursued, changed his form again and again). That is to say: the more we pursue the quantum, the more it changes. Here below is a short list of changing, evolving concepts, rules, topics, problems. Discreteness, indeterminism, probabilities, uncertainty relations, entropic uncertainty relations, non-definiteness of values before measurements, no (in general) retrodiction, essential randomness, incompressible randomness and undecidability, a-causality, contextuality, real/complex/quaternionic formalisms, Hilbert spaces or logical structures representation, correspondence principle, complementarity, duality, smooth transitions, quanta as carriers of limited information, second order complementarity, superpositions, entanglements, conditional entropies can be negative, algebraic non-separability, geometric non-locality, local hidden variables, non-local hidden variables, non-local hidden variables plus time arrow assumption, a-temporality, conspiracy theories, which one: free will or space-time?, time (in general) is not an observable, quantum interferences, Feynman rules, indistinguishability, erasure of indistinguishability, second order interferences, quantum dips, quantum beats, interferences in time, fractal revivals, ghost imaging, from potentiality to actuality via measurements, objective reduction of wave-packet, subjective reduction of wave-packet, pre-measurements, weak measurements, interaction-free measurements, two-time symmetric quantum theory, no-cloning principle, no-deleting principle, no-signaling principle (relativistic causality), are there negative probabilities?, de-coherence, sum-over-paths, beables, many-worlds, many-and-consistent-histories, the transactional, and so on, and on, and on. Is there also a "superquantum" domain? There is for sure, since Sandu Popescu and Daniel Rohrlich wrote 'Quantum Nonlocality as an Axiom' (in Foundations of Physics, Vol. 24, No. 3, 1994). Essentially it is the domain of superquantum correlations, stronger than the usual quantum correlations. As we all know, John Bell proved that quantum entanglement enables two space-like separated parties to exhibit classically impossible correlations. Even though these correlations are stronger than anything classically achievable, they cannot be harnessed to make instantaneous (faster than light) communication possible.
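To make the "stronger than quantum, yet still no signalling" point concrete, here is a minimal Python sketch of the Popescu-Rohrlich "nonlocal box" described in note [2] below. It only reproduces the statistics of such a box (a single function that sees both inputs is obviously not itself nonlocal), and the names pr_box, alice_stats and E are illustrative, not taken from the cited papers.

import random

def pr_box(x, y):
    # One use of an idealised PR box: Alice inputs bit x, Bob inputs bit y.
    # Each output is locally a fair coin, but the pair always satisfies
    # a XOR b = x AND y.
    a = random.randint(0, 1)
    b = a ^ (x & y)
    return a, b

def alice_stats(x, y, trials=200000):
    # How often Alice's output is 1 for the given pair of inputs.
    return sum(pr_box(x, y)[0] for _ in range(trials)) / trials

def E(x, y, trials=200000):
    # Correlation E(x,y) = P(a=b) - P(a!=b).
    s = 0
    for _ in range(trials):
        a, b = pr_box(x, y)
        s += 1 if a == b else -1
    return s / trials

# No-signalling: Alice's local statistics do not depend on Bob's input,
# so the correlation cannot carry a message.
print(alice_stats(0, 0), alice_stats(0, 1))      # both ~0.5

# CHSH: E(0,0)+E(0,1)+E(1,0)-E(1,1) = 4 for the PR box, above the
# quantum (Tsirelson) bound of 2*sqrt(2) and the classical bound of 2.
print(E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1))     # ~4.0

Run as is, the first line prints two frequencies near 0.5 and the second a CHSH value near 4, which is the sense in which these imaginary correlations are "superquantum" while still respecting relativistic causality.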
Yet, Popescu and Rohrlich have shown that even stronger correlations can be defined, under which instantaneous communication remains *impossible* (relativistic causality is safe). This raises the question: Why are the correlations achievable by quantum mechanics not maximal among those that preserve relativistic causality? There are no good answers to this question. But it is possible to show that superquantum correlations would result in a world in which the so called 'communication complexity' becomes 'trivial' [1] but 'magic' [2] [3]. So, good news for the SF writers. Or it seems so. s. [1] Assume Alice and Bob wish to compute some Boolean function f(x, y) of input x, known to Alice only, and input y, known to Bob only. Their concern is to minimize the amount of (classical) communication required between them for Alice to learn the answer. It is clear that this task cannot be accomplished without at least some communication (even if Alice and Bob share prior entanglement), unless f(x, y) does not actually depend on y, because otherwise instantaneous signalling would be possible. Thus, we say that the communication complexity of f is 'trivial' if the problem can be solved with a single bit of communication (a single bit of communication also protects relativistic causality). [2] A nonlocal box is an imaginary device that has an input-output port at Alice's and another one at Bob's, even though Alice and Bob can be space-like separated. Whenever Alice feeds a bit x into her input port, she gets a uniformly distributed random output bit a, locally uncorrelated with anything else, including her own input bit. The same applies to Bob, whose input and output bits we call y and b, respectively. The "magic" appears in the form of a correlation between the pair of outputs and the pair of inputs. Much like the correlations that can be established by use of quantum entanglement. This device (nonlocal box, also named PR box) is a-temporal. Alice gets her output as soon as she feeds in her input, regardless of if and when Bob feeds in his input, and vice versa. Also inspired by entanglement, this is a one-shot device. The correlation appears only as a result of the first pair of inputs fed in by Alice and Bob, respectively. [3] There is some literature, in example: http://arxiv.org/abs/0907.3584 http://arxiv.org/abs/quant-ph/0501159 From sjatkins at mac.com Wed Jan 6 18:45:37 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 06 Jan 2010 10:45:37 -0800 Subject: [ExI] atheism In-Reply-To: <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> Message-ID: <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> On Dec 28, 2009, at 5:03 AM, Stefano Vaj wrote: > 2009/12/28 Samantha Atkins > There is ample evidence that belief regardless of evidence or argument is harmful. > > Mmhhh. I would qualify that as an opinion of a moral duty to swear on the truth of unproved or disproved facts. > > This has something to do with the theist objection that positively believing that Allah does not "exist" would be a "faith" on an equally basis as their own. That is poor reasoning. It is not a "positive" believe at all. It is not even a belief at all. It is not believing in a positive belief for which there is no evidence. Now, if the formulation of "Allah" is actually contradictory then we can go to a stronger logical position of pointing out that such is impossible. 
> > Now, I may well be persuaded that my cat is sleeping in the other room even though no final evidence of the truth of my opinion thereupon is (still) there, and to form thousand of such provisional or ungrounded - and often wrong - beliefs is probably inevitable. But would I claim that such circumstances are a philosophical necessity or of ethical relevance? Obviously not... Poor analogy. We know that cats exist and that states such as sleeping exist. We know no such things about gods or that putative states. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Wed Jan 6 18:53:34 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 06 Jan 2010 10:53:34 -0800 Subject: [ExI] MBTI, and what a difference a letter makes... In-Reply-To: <4B438508.2000706@satx.rr.com> References: <201001051754.o05HsAfw000445@andromeda.ziaspace.com> <4B4382F5.1010207@satx.rr.com> <4B438508.2000706@satx.rr.com> Message-ID: On Jan 5, 2010, at 10:29 AM, Damien Broderick wrote: > On 1/5/2010 12:20 PM, I wrote: > >> Astrological systems could probably do that too, if you were allowed to >> browse through the descriptors and choose what "sign" you are, with the >> actual constellations etc entirely irrelevant (as they almost certainly >> are, except for the seasonal aspect mentioned above). > > Hmm, so what sun sign is closest to INTJ? I want to adopt it. I'll gladly change my birthday. Most posters here could take the same day, I imagine. What a party! > Well, any system of a sufficient number of variables can be mapped onto any other system reasonably described by that number or less variables. Since humans are notoriously limited in the number of variables they can simultaneously consider and since our minds by design jump to find patterns, even where there aren't any, it is easy to see how these systems arise and perpetuate themselves. They usually fall apart at the level of claiming they are predictive although even there humans are so muddle headed they will try to fit actual experience to previous now contradicted prediction. - samantha From sjatkins at mac.com Wed Jan 6 18:56:56 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 06 Jan 2010 10:56:56 -0800 Subject: [ExI] quantum brains In-Reply-To: <4B43B8B8.3030202@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> <4B43B8B8.3030202@satx.rr.com> Message-ID: <85365026-7F7B-4897-B4C9-B803D198A64E@mac.com> On Jan 5, 2010, at 2:10 PM, Damien Broderick wrote: > On 1/5/2010 3:25 PM, Stathis Papaioannou wrote: > >> That would make mind uploading impossible. It might still be possible to >> replicate a mind, but it wouln't have all the advantages of software. > > Yes, it's a disheartening thought. Unless minds are already being copied on a time-sharing entanglement basis through whatever medium psi operates in--which opens the way to a sort of version of (QT-instantiated) souls, maybe, causing John Clark to give up on me finally as a hopeless lost cause. Nature came up with mysterious ju-ju X but humans, a product of ju-ju X can't build anything else also incorporating X. Yeah, right. 
- s From thespike at satx.rr.com Wed Jan 6 20:10:01 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 06 Jan 2010 14:10:01 -0600 Subject: [ExI] quantum brains In-Reply-To: <85365026-7F7B-4897-B4C9-B803D198A64E@mac.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> <1CD984E2-C89F-4F72-8691-D1C325387B9B@gmail.com> <4B43B8B8.3030202@satx.rr.com> <85365026-7F7B-4897-B4C9-B803D198A64E@mac.com> Message-ID: <4B44EE19.9060109@satx.rr.com> On 1/6/2010 12:56 PM, Samantha Atkins wrote: >>> >> That would make mind uploading impossible. It might still be possible to >>> >> replicate a mind, but it wouldn't have all the advantages of software. >> > >> > Yes, it's a disheartening thought. Unless minds are already being copied on a time-sharing entanglement basis through whatever medium psi operates in--which opens the way to a sort of version of (QT-instantiated) souls, maybe > > Nature came up with mysterious ju-ju X but humans, a product of ju-ju X, can't build anything else also incorporating X. Yeah, right. That's not the argument. If natural selection stumbled on some sort of entanglement thingee that subserves consciousness and perhaps psi, it is quite plausible that making a mechanical brain out of beer cans and toilet paper or integrated circuits *just isn't using the right kind of stuff to instantiate a conscious mind.* Sure, we could reverse-engineer the process using the right kind of stuff (bioengineered up, maybe, or adiabatic quantum computers, or something as yet undreamed of) but it would still mean that linear electronic computers are a dead end *for consciousness* even if they are wizardly with computations and can beat your pants off at chess or Go or driving a racing car up Mount Everest. Damien Broderick From stefano.vaj at gmail.com Wed Jan 6 20:26:47 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 6 Jan 2010 21:26:47 +0100 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> <580930c21001011420t7d5c035eh29eeb3c396f5e6c7@mail.gmail.com> Message-ID: <580930c21001061226j46722e1eo43f7dbe839395cc8@mail.gmail.com> 2010/1/2 Stathis Papaioannou : > But organic brains do better than computers at the highest level of > mathematical creativity. The "highest level of mathematical creativity" is however only too often anthropomorphically defined as what is difficult to replicate at a given historical moment. Once upon a time, idiots savants doing several-digit arithmetic appeared to be the pinnacle of "intelligence". Then chess became the paradigm of human rational thought. Then calculus. Then... I think that Wolfram's A New Kind of Science contains important pieces of insight in this respect. However, what we know organic brains are *very* bad at doing is quantum computation...
-- Stefano Vaj From stefano.vaj at gmail.com Wed Jan 6 20:39:10 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 6 Jan 2010 21:39:10 +0100 Subject: [ExI] atheism In-Reply-To: <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> Message-ID: <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> 2010/1/6 Samantha Atkins : > On Dec 28, 2009, at 5:03 AM, Stefano Vaj wrote: >> This has something to do with the theist objection that positively believing >> that Allah does not "exist" would be a "faith" on an equally basis as their >> own. > > That is poor reasoning. ?It is not a "positive" believe at all. ?It is not > even a belief at all. ?It is not believing in a positive belief for which > there is no evidence. ? Now, if the formulation of "Allah" is actually > contradictory then we can go to a stronger logical position of pointing out > that such is impossible. This is exactly my point. A belief in the non-existence of Allah is perfectly plausible, and is on an entirely different level from a belief in its existence i) because it is perfectly normal and legitimate not just avoiding to believe in the existence of unproven things, but also actually believing in their non-existence; ii) in addition, Spiderman, Thor or Sherlock Holmes may (have) exist(ed) somewhere, someplace, Allah, Jahv? etc. have some peculiar existential and definitory problems which affect any entity allegedly being distinct from the world, and located out of the time... >> Now, I may well be persuaded that my cat is sleeping in the other room even >> though no final evidence of the truth of my opinion thereupon is (still) >> there, and to form thousand of such provisional or ungrounded - and often >> wrong - beliefs is probably inevitable. But would I claim that such >> circumstances are a philosophical necessity or of ethical relevance? >> Obviously not... > > Poor analogy. ?We know that cats exist and that states such as sleeping > exist. ?We know no such things about gods or that putative states. What I mean there is that while it is perfectly normal in everyday life to believe things without any material evidence thereof (the existence of cats and sleep does not tell me anything about the current state of my cat any more than the existence of number 27 on the roulette does not provide any ground for my belief that this is the number which is going to win, and therefore on which I should bet, at the next throw of the ball), what is abnormal is to claim that such assumptions are a philosophical necessity or of ethical relevance. -- Stefano Vaj From stefano.vaj at gmail.com Wed Jan 6 20:48:57 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 6 Jan 2010 21:48:57 +0100 Subject: [ExI] quantum brains In-Reply-To: <4B437BB0.2020306@satx.rr.com> References: <85434.91068.qm@web65601.mail.ac4.yahoo.com> <580930c21001050311hd90de7bqcd799e983e0f6a32@mail.gmail.com> <4B437BB0.2020306@satx.rr.com> Message-ID: <580930c21001061248l414ae514h9eb3330cff2d0636@mail.gmail.com> 2010/1/5 Damien Broderick : > I suppose it's possible that some autistic lightning calculators do that. > But I've read arxiv papers recently arguing that photosynthesis functions > via entanglement, so something that basic might be operating in other bio > systems. 
> > And of course since I'm persuaded that some psi phenomena are real, > *something* weird as shit is needed to account for them, something that can > either do stupendous simulations in multiple worlds/superposed states, or > can modify its state according to outcomes in the future. If that's not QM, > it's something equally hair-raising that electronic computers aren't built > to do. Fine. Perhaps some quantum phenomenon is relevant in, say, the operations of the liver or the photosynthesis. What would suggest that it is involved as well in, say, the healing of a wound or the computation performed by organic brains (which happens *not* to exhibit any of the features of a quantum computer)? As you know, I am also inclined to believe that the evidence is more on the side of the existence of some kind of psi phenomena rather than not, even more after reading your book on the subject ;-), but there again, if it had anything with quantum features of the brain, would it rreally be a defining feature of "intelligence"? Most human beings have today access to TV broadcasting, but I think a real human being could easily pass a Turing test even though cut out from the networks. Would lack of access to such very elusive, occasional and peripheral phenomena disqualify an AGI or an uploaded human being to be perceived any differently from his or her neighbours? -- Stefano Vaj From steinberg.will at gmail.com Wed Jan 6 21:15:24 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 6 Jan 2010 16:15:24 -0500 Subject: [ExI] Psi (but read it before you don't read it) Message-ID: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> I saw that Damien was talking about psi. I don't know what most of you think about it. It is good to have a crowd that at least has an opinion on it one way or the other though. When you try to boil down what people consider psionics, it is easy to draw a line between the completely ridiculous and the somewhat ridiculous. It is hard to back up a group of people that often espouses things like pyrokinesis, so I will say here that I would only give merit to a few of the ideas; namely, telepathy, empathy, remote viewing, and precognition. What distinguishes these from the rest is that they can be completely described in terms of knowing rather than doing. Telekinesis and the like rely on the user acting, while our "soft" psi is an act of observation. Acting across space to move an object might be absurd, but given causality and maybe entanglement, knowledge is only limited by computational power. It's not incredibly difficult to imagine a causality analysis system based on observations around us. Think about it like an implicit, extended anthropic principle: If *now* is like it is, the universe must be like it is. This would allow us to communicate telepathically not by *sending* a message, but instead by *knowing*, given the surroundings, what message you will receive. Empathy, remote viewing, and precognition work the same way, using accessible data to predict inaccessible data. The biggest problems are obviously the difficulty of synthesizing this information into coherent ideas and the "causal distance," or relative triviality, of observed events with regards to the topic at hand. Why should the spin of molecules in the air, the position of the stars at night, the precise feel of gravity, give us any indication as to a completely unrelated circumstance? 
It would follow from this that events that are causally close to you (that are linked to you by fewer steps backwards and forwards in time, generally having closer x, y, z, t) are more easily predictable than events that are causally distant. It's easy to see that this hold true for extreme circumstances--it is easy to know when I will pick up my fork to eat my next bite of dinner, not so easy when trying to guess the weather on Venus. The middle ground is harder to justify. It seems that predicting earthly events can be as hard, if not harder, than predicting otherworldly events. But these all rely on observation. The reason it is as impossible to guess the weather in Tulsa as on Venus (or anywhere) is that the system is very independent of any actions we make. Since any informational i/o will be ridiculously garbled by a chaotic system, this will be difficult anywhere. Most things are chaotic and unrelated to us. It follows that psi cannot operate on whim or on a desired object (ask many who believe they experience these things and they will tell you it happens to them rather than their causing it); it, should it exist, is carefully limited and allotted based on what is closest and with the least amount of informational decay. Perhaps some events and ideas manage to escape being broken apart and are instead retained as material information. Or, rather, perhaps some material sets of information diverge into paths sometime in the past, happening to exist in more than one locus later in time and thus be accessible by multiple, separated people. This happens today. We can understand the possible composition of unobservable parts of the universe based on mutated information from backwards in time like CMBR or spectral analysis. These are all based on causal distance. If we receive a wave that contains a lot of information and thus is helpful for understanding, it must have interacted with less than more garbled waves, which means it *does* less before we observe it. A wave that takes its sweet causal time to get to us might not even be a wave anymore; it could be the heat I feel coming from my laptop. When something is garbled, we have to work harder to understand how it is related. The picture of the heat of the universe is expressed very directly, but if you try to deduce the fact based on element levels in a rock sample, we have to make many, many more syllogisms, through the anthropic principle, geology, physics and chemistry, before we get to the end result. So--human intuition and experimentation is a means of reversing, through math, the transformations that time and being have effected on objects we want to understand. By taking the slow route, we end up learning more about the laws of the universe, because those laws are manifested in the physical interactions that we have to follow back in time. The "psionic" approach is quicker but would seem to skip a lot of the good stuff, which also leads to a lot of problems with proof and acceptance. We all know that the brain is mathematically capable of more than one is consciously allowed, to an incredible extent. It is in the best mind to humor the idea, if only for as long enough as a sensible discussion allows. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nanite1018 at gmail.com Wed Jan 6 21:31:55 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 6 Jan 2010 16:31:55 -0500 Subject: [ExI] Psi (but read it before you don't read it) In-Reply-To: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> Message-ID: <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> > I saw that Damien was talking about psi. I don't know what most of > you think about it. It is good to have a crowd that at least has an > opinion on it one way or the other though. I reject all types of this stuff because there is no conceivable way in which they could operate. The information you would have to analyze in order to gain any information about things in the future, or in other regions, or that is actually happening in someone's mind would require a sensitivity and processing capacity so far beyond what our minds are capable of it is astonishing. Our senses are very limited in terms of their acuity. Since I reject the very concept of an extra-body soul as meaningless and without ground in empirical evidence, I can see no way that any of this stuff could actually be real. Telepathy or empathy are likely a result of a Sherlock Holmesian attention to detail and an extremely good sense of body language analysis, etc. Other things, like precognition, are meaningless and are certainly the result of a combination of practical psychology (predicting what people will do based on knowledge about them), and of course luck and chance. On a related note, the metaphysical studies (i.e. astrology, psychic, occult, new age; alternatively called "crap") section of my local Borders is now approximately equal in size to the philosophy (which is no longer labeled on any of the signs) and the science sections combined. I find that sad. Joshua Job nanite1018 at gmail.com From reasonerkevin at yahoo.com Thu Jan 7 00:59:36 2010 From: reasonerkevin at yahoo.com (Kevin Freels) Date: Wed, 6 Jan 2010 16:59:36 -0800 (PST) Subject: [ExI] atheism In-Reply-To: <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> Message-ID: <504420.54703.qm@web81603.mail.mud.yahoo.com> ________________________________ From: Stefano Vaj To: ExI chat list Sent: Wed, January 6, 2010 2:39:10 PM Subject: Re: [ExI] atheism 2010/1/6 Samantha Atkins : > On Dec 28, 2009, at 5:03 AM, Stefano Vaj wrote: >> This has something to do with the theist objection that positively believing >> that Allah does not "exist" would be a "faith" on an equally basis as their >> own. > > That is poor reasoning. It is not a "positive" believe at all. It is not > even a belief at all. It is not believing in a positive belief for which > there is no evidence. Now, if the formulation of "Allah" is actually > contradictory then we can go to a stronger logical position of pointing out > that such is impossible. This is exactly my point. 
A belief in the non-existence of Allah is perfectly plausible, and is on an entirely different level from a belief in its existence i) because it is perfectly normal and legitimate not just avoiding to believe in the existence of unproven things, but also actually believing in their non-existence; ii) in addition, Spiderman, Thor or Sherlock Holmes may (have) exist(ed) somewhere, someplace, Allah, Jahv? etc. have some peculiar existential and definitory problems which affect any entity allegedly being distinct from the world, and located out of the time... >> Now, I may well be persuaded that my cat is sleeping in the other room even >> though no final evidence of the truth of my opinion thereupon is (still) >> there, and to form thousand of such provisional or ungrounded - and often >> wrong - beliefs is probably inevitable. But would I claim that such >> circumstances are a philosophical necessity or of ethical relevance? >> Obviously not... > > Poor analogy. We know that cats exist and that states such as sleeping > exist. We know no such things about gods or that putative states. What I mean there is that while it is perfectly normal in everyday life to believe things without any material evidence thereof (the existence of cats and sleep does not tell me anything about the current state of my cat any more than the existence of number 27 on the roulette does not provide any ground for my belief that this is the number which is going to win, and therefore on which I should bet, at the next throw of the ball), what is abnormal is to claim that such assumptions are a philosophical necessity or of ethical relevance. -- Stefano Vaj _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat It is quite different to say "I am convinced there is no God" than it is to say "I am not convinced there is a God" There is no evidence disproving the existence of God so to believe there is no god is indeed a faith in itself. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Jan 7 01:48:35 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 7 Jan 2010 12:18:35 +1030 Subject: [ExI] atheism In-Reply-To: <504420.54703.qm@web81603.mail.mud.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> Message-ID: <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> 2010/1/7 Kevin Freels : > > It is quite different to say "I am convinced there is no God" than it is to > say "I am not convinced there is a God" > There is no evidence disproving the existence of God so to believe there is > no god is indeed a faith in itself. It is quite different to say "I am convinced there is no Flying Spagetti Monster" than it is to say "I am not convinced there is a Flying Spagetti Monster" There is no evidence disproving the existence of the Flying Spagetti Monster, so to believe there is no Flying Spagetti Monster is indeed a faith in itself. If you open your mind far enough, your brain will fall out. 
-- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From gts_2000 at yahoo.com Thu Jan 7 02:07:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 6 Jan 2010 18:07:59 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <165235.93343.qm@web36505.mail.mud.yahoo.com> --- On Wed, 1/6/10, Stathis Papaioannou wrote: >> I don't think Searle ever considered a thought experiment exactly like >> the one we created here. > > He did... You've merely re-quoted that same paragraph from that same Chalmers paper that you keep referencing. That experiment hardly compares to your much more ingenious one. :) As you point out: > He is discussing here the replacement of neurons in the > visual cortex.... But here we do something much more profound and dramatic: we replace the semantic center(s) of the brain, presumably integral to both spoken and unspoken thought. > He agrees that it is possible to make functionally identical computerised > neurons because he accepts that physics is computable. He accepts that physics is computable, and that the brain is computable, but he certainly would not agree that your p-neurons act "functionally identical" to b-neurons if we include in that definition c-neuron capability. > However, he believes that consciousness will become > decoupled from behaviour: the patient will become blind, will realise he > is blind and try to cry out, but he will hear himself saying that > everything is normal and will be powerless to do anything about it. That > would only be possible if the patient is doing his thinking with > something other than his brain... Looks to me that he does his thinking with that portion of his natural brain that still exists. Searle goes on to describe how as the experiment progresses and more microchips take the place of those remaining b-neurons, the remainder of his natural brain vanishes along with his experience. > ...he has always claimed that thinking is done with the brain and there > is no immaterial soul. Right. So perhaps Searle used some loose language in a few sentences and perhaps you misinterpreted him based on those sentences from a single paragraph taken out of context in paper written by one his critics. Better to look at his entire philosophy. >> The surgeon starts with a patient with a semantic >> deficit caused by a brain lesion in Wernicke's area. He >> replaces those damaged b-neurons with p-neurons believing >> just as you do that they will behave and function in every >> respect exactly as would have the healthy b-neurons that >> once existed there. However on my account of p-neurons, they >> do not resolve the patient's symptoms and so the surgeon >> goes back in to attempt more cures, only creating more >> semantic issues for the patient. > > Can you explain why you think the p-neurons won't be > functionally identical? You didn't reply to a fairly lengthy post of mine yesterday so perhaps you missed my answer to that question. I'll cut, paste and add to my own words... You've made the same assumption (wrongly imo) as in your last experiment that p-neurons will behave and function exactly like the b-neurons they replaced. They won't except perhaps under epiphenomenalism, the view that experience plays no role in behavior. 
If you accept epiphenomenalism and reject the common and in my opinion more sensible view that experience does affect behavior then we need to discuss that philosophical problem before we can go forward. (Should we?) Speaking as one who rejects epiphenomenalism, it looks to me that serious complications will arise for the first surgeon who attempts this surgery with p-neurons. Why? Because... 1) experience affects behavior, and 2) behavior includes neuronal behavior, and 3) experience of one's own understanding of words counts as a very important kind of experience, It follows that: Non-c-neurons in the semantic center of the brain will not behave like b-neurons. And because the p-neurons in Cram's brain in my view equal non-c-neurons, they won't behave like the b-neurons they replaced. Does that make sense to you? I hope so. This conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he would otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction. Fortunately for Cram (or at least for his body) the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons throughout his entire brain until his patient appears ready for life on the streets, zombifying much or all his brain in the process. > It seems that you do believe (unlike Searle) that there is > something about neuronal behaviour that is not computable, No I don't suppose anything non-computable about them. But I do believe that mere computational representations of b-neurons, (aka p-neurons), do not equal c-neurons. > otherwise there would be nothing preventing the creation of p-neurons > that are drop-in replacements for b-neurons, guaranteed to leave > behaviour unchanged. See above re: epiphenomenalism. -gts From nymphomation at gmail.com Thu Jan 7 02:24:55 2010 From: nymphomation at gmail.com (*Nym*) Date: Thu, 7 Jan 2010 02:24:55 +0000 Subject: [ExI] atheism In-Reply-To: <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> Message-ID: <7e1e56ce1001061824p26ce75ddmffb45b8fbec10252@mail.gmail.com> 2010/1/7 Emlyn : > 2010/1/7 Kevin Freels : >> >> It is quite different to say "I am convinced there is no God" than it is to >> say "I am not convinced there is a God" >> There is no evidence disproving the existence of God so to believe there is >> no god is indeed a faith in itself. > > It is quite different to say "I am convinced there is no Flying > Spagetti Monster" than it is to say "I am not convinced there is a > Flying Spagetti Monster" > There is no evidence disproving the existence of the Flying Spagetti > Monster, so to believe there is no Flying Spagetti Monster is indeed a > faith in itself. I don't believe in the Flying Spaghetti Monster is much easier to defend than I believe there's no Flying Spaghetti Monster. Perhaps christians/pastafarians are framing remarks to trap atheists into having to backtrack in debates? Well, the ones who can actually spell and construct sentences at least. Just a thought.. 
Heavy splashings, Thee Nymphomation 'If you cannot afford an executioner, a duty executioner will be appointed to you free of charge by the court' From emlynoregan at gmail.com Thu Jan 7 03:28:19 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 7 Jan 2010 13:58:19 +1030 Subject: [ExI] atheism In-Reply-To: <7e1e56ce1001061824p26ce75ddmffb45b8fbec10252@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> <710b78fc1001061748ieccfbfyd4a3393fff282542@mail.gmail.com> <7e1e56ce1001061824p26ce75ddmffb45b8fbec10252@mail.gmail.com> Message-ID: <710b78fc1001061928t4b2038d0k245690f7eed1d0c7@mail.gmail.com> 2010/1/7 *Nym* : > 2010/1/7 Emlyn : >> 2010/1/7 Kevin Freels : >>> >>> It is quite different to say "I am convinced there is no God" than it is to >>> say "I am not convinced there is a God" >>> There is no evidence disproving the existence of God so to believe there is >>> no god is indeed a faith in itself. >> >> It is quite different to say "I am convinced there is no Flying >> Spagetti Monster" than it is to say "I am not convinced there is a >> Flying Spagetti Monster" >> There is no evidence disproving the existence of the Flying Spagetti >> Monster, so to believe there is no Flying Spagetti Monster is indeed a >> faith in itself. > > I don't believe in the Flying Spaghetti Monster is much easier to > defend than I believe there's no Flying Spaghetti Monster. Perhaps > christians/pastafarians are framing remarks to trap atheists into > having to backtrack in debates? Well, the ones who can actually spell > and construct sentences at least. > There is certainly a lot of abuse of the word "belief". Belief in something or in the lack of something doesn't automatically make you religious. There's not 100% certainty of much anything in this world, but if I say that I believe the sun will rise tomorrow, which I do believe and am saying, that doesn't make me equivalently irrational to someone who says they believe in the judeo christan god. Samantha's words were "belief regardless of evidence or argument", and largely everyone has focussed on "evidence" and ignored "argument". There's no evidence that the Sun will rise tomorrow (we can't know the future), without the assumption, the belief in the argument, that the past can be used to predict the future according to certain rules (which might in turn rest on belief in the usefulness of the laws of logic). It's ok to believe stuff, if you have a supportable reason to. Belief that there is no god, much less no FSM, is supportable (occam's razor basically) in a way that belief in any particular deity + system of worship is not. These two uses of the word "belief" are of a different class. Perhaps you could say "supportable belief" and "unsupportable belief". -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From femmechakra at yahoo.ca Thu Jan 7 03:20:23 2010 From: femmechakra at yahoo.ca (Anna Taylor) Date: Wed, 6 Jan 2010 19:20:23 -0800 (PST) Subject: [ExI] Psi (but read it before you don't read it) Message-ID: <92320.5151.qm@web110414.mail.gq1.yahoo.com> JOSHUA JOB nanite1018 at gmail.com? wrote: >>I reject all types of this stuff because there is no conceivable way in >>which they could operate. That's absolutely true.? 
The thing with "Psi" is that it's not yet mathematically formalized and not yet computable. Rejecting it is like saying that it will never be possible. I'm surprised every time a bright mind rejects something that is all about imagination, as if any scientific accomplishment were solely based on the calculations as opposed to the knowledge behind them. >>The information you would have to analyze in order to gain any >>information about things in the future, or in other regions, or that is >>actually happening in someones mind would require a sensitivity and >>processing capacity so far beyond what our minds are capable of it is >>astonishing. It could be that it's beyond your capability. Are you saying that people who study human behaviour don't have a better grasp of personality types, racial differences and/or norm behaviours? To analyze any abundant amount of data alone could take one person an eternity or a lifetime and could still never amount to anything, but that doesn't mean that it's not possible, especially when reported cases keep outpacing the scientific refusal to examine them. >>Our senses are very limited in terms of their acuity. Some people's senses are very strong. They may not be exact but they are much more accurate than they used to be:) >>Since I reject the very concept of an extra-body soul as meaningless and >>without ground in empirical evidence, I can see no way that any of this >>stuff could actually be real. I'm not sure who you are to reject anything, as I have no idea what you have done, said or written, but I'm pretty sure I can find numerous people who can describe an extra-body "soul" (or otherwise "outer", "miracle", "relevance not known" and/or "feeling") experience. >>Telepathy or empathy are likely a result of a Sherlock Holmesian >>attention to detail and an extremely good sense of body language >>analysis, etc. Listening plays a huge role within telepathy and feelings play a huge role in empathy. Telepathy as commonly defined is naive: to really believe that someone can simply "hear someone's mental thoughts" is rather childish, but when you actually take the time to listen you can learn a great deal about what people think. >>Other things, like precognition, are meaningless and are certainly the >>result of a combination of practical psychology (predicting what people >>will do based on knowledge about them), and of course luck and chance. I don't really believe that anyone can see "the Future". If you are so closed off about the idea of practical psychology and how it can "affect" things, then you just don't see anything grander than your limited calculations. That's too bad, because to analyze is great, but "out of the box" ideas or experiences, imagination and a creative role help to change things and make a difference. Refusing an idea simply because no one has proved it is unimaginative. I guess you like your box:) >>On a related note, the metaphysical studies (i.e. astrology, psychic, >>occult, new age; alternatively called "crap") section of my local >>Borders is now approximately equal in size to the philosophy (which is >>no longer labelled on any of the signs) and the science sections >>combined. I find that sad. I got bored with math and economics a long time ago. (I really didn't feel like taking the time to fully understand them.) Thinking that maybe some people are good at some things while others are good at different things may give you a wider perspective on the whole "Psi" thing.
Anna:) __________________________________________________________________ Yahoo! Canada Toolbar: Search from anywhere on the web, and bookmark your favourite sites. Download it now http://ca.toolbar.yahoo.com. From stathisp at gmail.com Thu Jan 7 06:52:33 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 7 Jan 2010 17:52:33 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <165235.93343.qm@web36505.mail.mud.yahoo.com> References: <165235.93343.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/7 Gordon Swobe : > --- On Wed, 1/6/10, Stathis Papaioannou wrote: > >>> I don't think Searle ever considered a thought experiment exactly like >>> the one we created here. >> >> He did... > > You've merely re-quoted that same paragraph from that same Chalmers paper that you keep referencing. That experiment hardly compares to your much more ingenious one. :) > > As you point out: > >> He is discussing here the replacement of neurons in the >> visual cortex.... > > But here we do something much more profound and dramatic: we replace the semantic center(s) of the brain, presumably integral to both spoken and unspoken thought. You can see though that it's just a special case. We could replace neurons in any part of the brain, affecting any aspect of cognition. >> He agrees that it is possible to make functionally identical computerised >> neurons because he accepts that physics is computable. > > He accepts that physics is computable, and that the brain is computable, but he certainly would not agree that your p-neurons act "functionally identical" to b-neurons if we include in that definition c-neuron capability. Functionally identical *except* for consciousness, in the same way that a philosophical zombie is functionally identical except for consciousness. All a p-neuron has to do is pass as a normal neuron as far as the b-neurons are concerned, i.e. produce the same outputs in response to the same inputs. Are you claiming that it is possible for a zombie to fool intelligent and fully conscious humans but impossible for a p-neuron to fool b-neurons? That doesn't sound plausible, but if it is the case, it simply means that there is something about the behaviour of neurons which is not computable. You can't say both that the behaviour of neurons is computable *and* that it's impossible to make p-neurons which behave like b-neurons. >> However, he believes that consciousness will become >> decoupled from behaviour: the patient will become blind, will realise he >> is blind and try to cry out, but he will hear himself saying that >> everything is normal and will be powerless to do anything about it. That >> would only be possible if the patient is doing his thinking with >> something other than his brain... > > Looks to me that he does his thinking with that portion of his natural brain that still exists. Searle goes on to describe how as the experiment progresses and more microchips take the place of those remaining b-neurons, the remainder of his natural brain vanishes along with his experience. Yes, but the problem is that the natural part of his brain is constrained to behave in the same way as if there had been no replacement, since the p-neurons send it the same output. It's impossible for the rest of the brain to behave differently. Searle seems to acknowledge this because he accepts that the patient will behave normally, i.e. will have normal motor output. However, he thinks the patient will have abnormal thoughts which he will be unable to communicate! 
Where do these thoughts come from, if all the b-neurons in the brain are behaving normally? They can only come from something other than the neurons. If you have another explanation, please provide it. >> ...he has always claimed that thinking is done with the brain and there >> is no immaterial soul. > > Right. So perhaps Searle used some loose language in a few sentences and perhaps you misinterpreted him based on those sentences from a single paragraph taken out of context in paper written by one his critics. Better to look at his entire philosophy. This is *serious* problem for Searle, invalidating his entire thesis that it is possible to make brain components that behave normally but lack consciousness. It simply isn't possible. I think even you are seeing this, since to avoid the problem you now seem to be suggesting that it isn't really possible to make zombie p-neurons at all. >>> The surgeon starts with a patient with a semantic >>> deficit caused by a brain lesion in Wernicke's area. He >>> replaces those damaged b-neurons with p-neurons believing >>> just as you do that they will behave and function in every >>> respect exactly as would have the healthy b-neurons that >>> once existed there. However on my account of p-neurons, they >>> do not resolve the patient's symptoms and so the surgeon >>> goes back in to attempt more cures, only creating more >>> semantic issues for the patient. >> >> Can you explain why you think the p-neurons won't be >> functionally identical? > > You didn't reply to a fairly lengthy post of mine yesterday so perhaps you missed my answer to that question. I'll cut, paste and add to my own words... > > You've made the same assumption (wrongly imo) as in your last experiment that p-neurons will behave and function exactly like the b-neurons they replaced. They won't except perhaps under epiphenomenalism, the view that experience plays no role in behavior. > > If you accept epiphenomenalism and reject the common and in my opinion more sensible view that experience does affect behavior then we need to discuss that philosophical problem before we can go forward. (Should we?) > > Speaking as one who rejects epiphenomenalism, it looks to me that serious complications will arise for the first surgeon who attempts this surgery with p-neurons. Why? > > Because... > > 1) experience affects behavior, and > 2) behavior includes neuronal behavior, and > 3) experience of one's own understanding of words counts as a very important kind of experience, > > It follows that: > > Non-c-neurons in the semantic center of the brain will not behave like b-neurons. And because the p-neurons in Cram's brain in my view equal non-c-neurons, they won't behave like the b-neurons they replaced. > > Does that make sense to you? I hope so. It makes sense. You are saying that the NCC affects neuronal behaviour, and the NCC is that part of neuronal behaviour that cannot be simulated by computer, since if it could you could program the p-neurons to adjust their I/O behaviour accordingly. Therefore, neurons must contain uncomputable physics in the NCC. > This conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he would otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction. 
Fortunately for Cram (or at least for his body) the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons throughout his entire brain until his patient appears ready for life on the streets, zombifying much or all his brain in the process. > >> It seems that you do believe (unlike Searle) that there is >> something about neuronal behaviour that is not computable, > > No I don't suppose anything non-computable about them. But I do believe that mere computational representations of b-neurons, (aka p-neurons), do not equal c-neurons. There *must* be something uncomputable about the behaviour of neurons if it can't be copied well enough to make p-neurons, artificial neurons which behave exactly like b-neurons but lack the essential ingredient for consciousness. This isn't a contingent fact, it's a logical requirement. -- Stathis Papaioannou From florent.berthet at gmail.com Thu Jan 7 07:33:35 2010 From: florent.berthet at gmail.com (Florent Berthet) Date: Thu, 7 Jan 2010 08:33:35 +0100 Subject: [ExI] Psi (but read it before you don't read it) In-Reply-To: <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> Message-ID: <6d342ad71001062333i2b0ab2edi893866d1eba32a0@mail.gmail.com> The data so far : http://xkcd.com/373/ 2010/1/6 JOSHUA JOB > I saw that Damien was talking about psi. I don't know what most of you >> think about it. It is good to have a crowd that at least has an opinion on >> it one way or the other though. >> > I reject all types of this stuff because there is no conceivable way in > which they could operate. The information you would have to analyze in order > to gain any information about things in the future, or in other regions, or > that is actually happening in someone's mind would require a sensitivity and > processing capacity so far beyond what our minds are capable of it is > astonishing. Our senses are very limited in terms of their acuity. > > Since I reject the very concept of an extra-body soul as meaningless and > without ground in empirical evidence, I can see no way that any of this > stuff could actually be real. Telepathy or empathy are likely a result of a > Sherlock Holmesian attention to detail and an extremely good sense of body > language analysis, etc. Other things, like precognition, are meaningless and > are certainly the result of a combination of practical psychology > (predicting what people will do based on knowledge about them), and of > course luck and chance. > > On a related note, the metaphysical studies (i.e. astrology, psychic, > occult, new age; alternatively called "crap") section of my local Borders is > now approximately equal in size to the philosophy (which is no longer > labeled on any of the signs) and the science sections combined. I find that > sad. > > > Joshua Job > nanite1018 at gmail.com > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From avantguardian2020 at yahoo.com  Thu Jan 7 08:44:54 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Thu, 7 Jan 2010 00:44:54 -0800 (PST)
Subject: [ExI] Some new angle about AI
Message-ID: <383075.6531.qm@web65607.mail.ac4.yahoo.com>

----- Original Message ----
> From: Stefano Vaj
> To: ExI chat list
> Sent: Tue, January 5, 2010 3:11:58 AM
> Subject: Re: [ExI] Some new angle about AI
>
> 2009/12/30 The Avantguardian :
> > Well some hints are more obvious than others. ;-)
> > http://www.hplusmagazine.com/articles/bio/spooky-world-quantum-biology
> > http://www.ks.uiuc.edu/Research/quantum_biology/
>
> It is not that I do not know the sources, Penrose in the first place. Car engines are also made of molecules, which are made of atoms, and ultimately are the expression of an underlying quantum reality. What I find unpersuasive is the theory that life, however defined, is anything special amongst high-level chemical reactions.

Well what other "high-level chemical reactions" are there to compare life to? Flames don't run away when you try to extinguish them. Motile bacteria do.

> It may very well be the case that quantum computation is in a sense pervasive, but again I do not see why life, however defined, would be a special case in this respect, since I do not see organic brains exhibiting quantum computation features any more than, say, PCs, and I suspect that "biological anticipations", etc., are more in the nature of "optical artifacts" like the Intelligent Design of organisms.

I think a lot of the quantum computation goes on below the conscious threshold, in things so simple that most people take them for granted. Things like facial recognition happen nearly instantaneously in the brain but take standard computers running algorithms quite a bit of time to accomplish. Shooting billiards, playing dodgeball, writing a novel, seducing a lover - I imagine a lot of quantum computing goes into these things. Besides, you seem to totally discount the fact that brains formed the very concept of quantum mechanics and quantum computing in the first place.

> >> There again, the theoretical issue would be simply that of executing a program emulating what we execute ourselves closely enough to qualify as "human-like" for arbitrary purposes, and find ways to implement it in a manner not making us await its responses for multiples of the duration of the Universe... ;-)
> >
> > In order to do so, it would have to consider a superposition of every possible response and collapse the output "wavefunction" on the most appropriate response.
>
> *If* organic brains actually do some quantum computing. Now, I still have to see any human being solving a typical quantum computing problem with a pencil and a piece of paper... ;-)

Perhaps the ability to generalize from specific observations is a quantum computation. A child needs to see no more than a few trees to start recognizing types of trees that he has never seen before as "trees" as opposed to "poles" or "towers". In some sense the generic visual concept of "tree" might somehow be processed as a superposition of every type of tree, from an oak to a cedar to a sequoia.

Stuart LaForge

"Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten."
- Neil Armstrong

From avantguardian2020 at yahoo.com  Thu Jan 7 11:10:12 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Thu, 7 Jan 2010 03:10:12 -0800 (PST)
Subject: [ExI] World of Statecraft
Message-ID: <293329.31522.qm@web65607.mail.ac4.yahoo.com>

I notice that a lot of the debate on the list takes the form of debates over sociopolitical ideologies and the relative merits of each, utilizing a very limited pool of historical examples: capitalism versus socialism versus minarchism versus populism versus democracy versus fascism, etc. These debates seem to become very acrimonious, and people seem to invest a lot of emotion in their chosen ideology on what amounts to little more than faith in the status quo.

Admittedly I haven't put a lot of thought into it, so it is still a very rough idea, but it occurred to me that modified MMORPGs would make a great "laboratory" of sorts to empirically compare all the possible ideologies with one another in a risk-free controlled setting. One would simply need to eliminate computer-generated "antagonists" and simply have the world populated by actual players with characteristics and abilities similar to any of the dozens of existing MMORPGs, but more "down to earth". The players could form whatever types of "states" they wanted and compete against each other for some predetermined periods of time, with the servers keeping track of metrics of success and failure of the various "states" resulting from the aggregate behavior of the individual players. One could simulate wars and markets and whatever else. This way dozens of civilizations could rise and fall within the space of a few years of real time, and the reasons for each could be analyzed by political scientists and economists and the lessons could be applied to the real world.

Admittedly this might not be as fun as scorching hordes of computer-generated orcs with magical fireballs, but it could be funded by grant money sufficient to pay the participants some small amount of cash for their participation as "research subjects".

Stuart LaForge

"Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten."
- Neil Armstrong

From stefano.vaj at gmail.com  Thu Jan 7 12:30:41 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 7 Jan 2010 13:30:41 +0100
Subject: [ExI] Some new angle about AI
In-Reply-To: <383075.6531.qm@web65607.mail.ac4.yahoo.com>
References: <383075.6531.qm@web65607.mail.ac4.yahoo.com>
Message-ID: <580930c21001070430s4032aa2el8c63b979be8d7c@mail.gmail.com>

2010/1/7 The Avantguardian :
> Well what other "high-level chemical reactions" are there to compare life to? Flames don't run away when you try to extinguish them. Motile bacteria do.

How would that qualify as a quantum effect? :-/

> I think a lot of the quantum computation goes on below the conscious threshold, in things so simple that most people take them for granted. Things like facial recognition happen nearly instantaneously in the brain but take standard computers running algorithms quite a bit of time to accomplish.

Of course, organic brains have evolved to do (relatively) well what they do, but this does not tell us anything about their low-level working, nor that they would escape the Principle of Computational Equivalence as far as their... computing features are concerned (the jury may still be out on some other aspects of their working).
The fact that a Motorola processor used to run Windows less efficiently than an Intel processor does not really suggest that the second is a quantum computer. And plenty of phenomena which have apparently little to do with quantum effects are more or less heavy to emulate or computationally intractable. See the weather. Or, once more, the plenty of examples discussed in a New Kind of Science... > Shooting billiards, playing dodgeball, writing a novel, seducing a lover, I?imagine a lot of quantum computing goes into these things. Why? Conversely, I am not aware of *even a single feature* of any hypothetical quantum computer which is easily emulated by organic brains. Take for instance integer factorisation. Or any other prob where quantum computing would make a difference. "Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees." (from Wikipedia). If you had a quantum computer in your head, all that should be a piece of bread, once you have learned the appropriate algorithm. It is on the contrary the case that we are *way* better at, say, additions of small integers or Boolean algebra. And, by the way, most natural organic brains have no chances whatsoever to learn how shooting billiards, playing dodgeball, writing a novel, seducing a lover, no matter how much training effort you put into it, even though their underlying principles appear pretty similar to one another... -- Stefano Vaj From gts_2000 at yahoo.com Thu Jan 7 12:51:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 7 Jan 2010 04:51:21 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <525969.17123.qm@web36505.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: > It makes sense. You are saying that the NCC affects > neuronal behaviour, and the NCC is that part of neuronal > behaviour that cannot be simulated by computer Not quite. I said *experience* affects behavior, and I did not say we could not simulate the NCC on a computer. Where the NCC (neural correlates of consciousness) exist in real brains, experience exists, and the NCC correlate. (That's why the second "C" in NCC.) Think of it this way: consciousness exists in real brains in the presence of the NCC as solidity of real water exists in the presence of temperatures at or below 32 degrees Fahrenheit. You can simulate ice cubes on your computer but those simulated ice cubes won't help keep your processor from overheating. Likewise, you can simulate brains on your computer but that simulated brain won't have any real experience. 
In both examples, you have merely computed simulations of real things.

> Therefore, [you think I mean to say that] neurons must contain
> uncomputable physics in the NCC.

But I don't mean that. Look again at my ice cube analogy!

-gts

From gts_2000 at yahoo.com  Thu Jan 7 13:30:18 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 7 Jan 2010 05:30:18 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <525969.17123.qm@web36505.mail.mud.yahoo.com>
Message-ID: <527984.77632.qm@web36507.mail.mud.yahoo.com>

Stathis,

I wrote:
> Where the NCC (neural correlates of consciousness) exist in
> real brains, experience exists, and the NCC correlate.
> (That's why the second "C" in NCC.)

I meant the first "C", of course. The NCC *correlate*.

If we knew exactly what physical conditions must exist in the brain for consciousness to exist, i.e., if we knew everything about the NCC, then we could perfectly simulate those physical conditions on a computer. And someday we will do this. But that computer simulation will have only weak AI for the same reason that simulated ice cubes won't cool your computer's processor.

I understand why you want to say that I must therefore think consciousness exists outside the material world, or that I think we cannot compute the brain. But that's not what I mean at all. I see consciousness as just a state that the brain can be in. We can simulate that brain-state on a computer just as we can simulate the solid state of water.

-gts

From painlord2k at libero.it  Thu Jan 7 13:44:18 2010
From: painlord2k at libero.it (Mirco Romanato)
Date: Thu, 07 Jan 2010 14:44:18 +0100
Subject: [ExI] World of Statecraft
In-Reply-To: <293329.31522.qm@web65607.mail.ac4.yahoo.com>
References: <293329.31522.qm@web65607.mail.ac4.yahoo.com>
Message-ID: <4B45E532.9030409@libero.it>

On 07/01/2010 12.10, The Avantguardian wrote:
> I notice that a lot of the debate on the list takes the form of debates
> over sociopolitical ideologies and the relative merits of each,
> utilizing a very limited pool of historical examples: capitalism
> versus socialism versus minarchism versus populism versus democracy
> versus fascism, etc. These debates seem to become very acrimonious
> and people seem to invest a lot of emotion in their chosen ideology
> on what amounts to little more than faith in the status quo.
> Admittedly I haven't put a lot of thought into it, so it is still a
> very rough idea, but it occurred to me that modified MMORPGs would
> make a great "laboratory" of sorts to empirically compare all the
> possible ideologies with one another in a risk-free controlled
> setting.

Yes, they do.

For example, EVE Online is considered to have the best economy. Nearly everything in the game is mined and built by the players and can be sold or bought on the open market, with contracts and with direct exchange. Remarkably, the behavior of the markets is close to what the theory says. For example, the distribution of market hubs is almost exactly what the theory predicts, and prices move the way the theory says they should.

> One would simply need to eliminate computer-generated "antagonists"
> and simply have the world populated by actual players with
> characteristics and abilities similar to any of the dozens of
> existing MMORPGs, but more "down to earth".

Well, in EVE, as in many other MMORPGs, the NPCs are not "antagonists"; they are "resources" to harvest in a more or less organized way.
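As a toy illustration of the kind of controlled experiment such a shared-world economy makes possible - and of the textbook behaviour the EVE data apparently reproduce - here is a minimal Python sketch of a single-good market with an invented linear demand rule. Every number and the demand rule itself are made up purely for illustration; nothing here is taken from EVE's actual data or code. Open a new "asteroid belt" halfway through the run and the clearing price drops, just as the theory predicts:

    # Toy supply-shock experiment: one good, a linear demand curve.
    # All parameters here are invented for illustration only.
    def clearing_price(units_supplied, max_price=100.0, slope=0.05):
        """Price at which buyers absorb the supplied units (linear demand)."""
        return max(max_price - slope * units_supplied, 0.0)

    supply = 1000                      # baseline units mined per day
    for day in range(1, 11):
        if day == 5:                   # a new "asteroid belt" opens mid-run
            supply = 1500
        print("day", day, "price", round(clearing_price(supply), 2))

Nothing deep, of course - the interesting version is the one run with thousands of human players instead of one equation.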
The real antagonists and enemies are other players and other player's corporations and alliances that compete for the control/sovranity over the 0.0 security areas and their resources. This part of the game is very much as political as militaristic as economic. Huge battles are fought, groups change side, leave for greener pasture (or simply quieter ones) and enormous quantity of resources are spent or change hands or are invested. To give numbers, many battles normally could be waged by more than 100 pilots on a side and the same for the other. 200-300 pilots in the same fleet are not rare. Given the rules of the game, Madoff-style scams are OK in-game, speculations on the market are OK, economic warfare is OK, infiltrating enemy corporations and alliances to steal and destroy stuff is OK. If it is possible by the game mechanics, it is OK. > The players could form whatever types of "states" that they wanted > and compete against each other for some predetermined periods time > with the servers keeping track of metrics of success and failure of > the various "states" resulting from the aggregate behavior of the > individual players. One could simulate wars and markets and whatever > else. This way dozens of civilizations could rise and fall within the > space of a few years of real time and the reasons for each could be > analyzed by political scientists and economists and the lessons could > be applied to the real world. EVE have a real economist surveying the economy and releasing quarterly analyses of how the economy work. Dr Eyjol Gudmondsson (formerly of the University of Iceland) http://www.gamesindustry.biz/articles/star-bucks http://www.pcpro.co.uk/news/122840/virtual-world-hires-real-economist >> "As a real economist I had to spend months trying to find data to >> test an economic theory but if I was wrong, I wasn't sure if the >> theory was wrong or the data was wrong. At least here I know the >> data is right," Guodmundsson said. >> >> As new players join, CCP adds new planets and asteroids that can be >> exploited, one of several "faucets" that serve to inject funds into >> the universe and keep the economy ticking. >> >> "After we opened up an area where there was more zydrine (an >> in-game mineral), we saw that price dropped. We did not announce >> that there was more explicitly, but in a matter of days the price >> had adjusted," Guodmundsson said. > Admittedly this might not be as fun as scorching hordes of computer > generated orcs with magical fireballs, but it could be funded by > grant money sufficient to pay the participants some small amount of > cash for their participation as "research subjects". Scorching CG orcs with fireballs is boring. Scorching human generated adversaries in many ways is funnier. The point, like in EVE, is having all the players in the same shared world. Not separated instances. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.725 / Database dei virus: 270.14.129/2605 - Data di rilascio: 01/07/10 08:35:00 From stathisp at gmail.com Thu Jan 7 14:03:59 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 01:03:59 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <525969.17123.qm@web36505.mail.mud.yahoo.com> References: <525969.17123.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/7 Gordon Swobe : > --- On Thu, 1/7/10, Stathis Papaioannou wrote: > >> It makes sense. 
You are saying that the NCC affects >> neuronal behaviour, and the NCC is that part of neuronal >> behaviour that cannot be simulated by computer > > Not quite. I said *experience* affects behavior, and I did not say we could not simulate the NCC on a computer. "Experience" can only affect behaviour by moving stuff. How does the stuff get moved? What would have to happen is something like this: the NCC molecule attaches to certain ion channels, changing their conformation and thereby allowing an influx of sodium ions, depolarising the cell membrane; and this event constitutes a little piece of experience. So while you claim "experience" cannot be simulated, you allow that the physical events associated with the experience can be simulated, which means every aspect of the neuron's behaviour can be simulated. > Where the NCC (neural correlates of consciousness) exist in real brains, experience exists, and the NCC correlate. (That's why the second "C" in NCC.) > > Think of it this way: consciousness exists in real brains in the presence of the NCC as solidity of real water exists in the presence of temperatures at or below 32 degrees Fahrenheit. > > You can simulate ice cubes on your computer but those simulated ice cubes won't help keep your processor from overheating. Likewise, you can simulate brains on your computer but that simulated brain won't have any real experience. In both examples, you have merely computed simulations of real things. If you want the computer to interact with the world you have to attach it to I/O devices which are not themselves computers. For example, the computer could be attached to a peltier device in order to simulate the cooling effect that an ice cube would have on the processor. >> Therefore, [you think I mean to say that] neurons must contain >> uncomputable physics in the NCC. > > But I don't mean that. Look again at my ice cube analogy! The question of whether it is possible to put a computer in a neuron suit so that its behaviour is, to other neurons, indistinguishable from a natural neuron is equivalent to the question of whether a robot can impersonate a human well enough so that other humans can't tell that it's a robot. I know you believe the robot human would lack intentionality, but you have (I think) agreed that despite this handicap it would be able to pass the TT, pretend to have emotions, and so on, as it would have to do in order to qualify as a philosophical zombie. So are you now saying that while a zombie robot human presents no theoretical problem, a zombie robot neuron, which after all only needs to reproduce much simpler behaviour and only needs to fool other neurons, would be impossible? -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 7 14:10:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 7 Jan 2010 06:10:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <446942.27612.qm@web36504.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: > There *must* be something uncomputable about the behaviour of neurons... No. >... if it can't be copied well enough to make p-neurons, > artificial neurons which behave exactly like b-neurons but lack the > essential ingredient for consciousness. This isn't a contingent fact, > it's a logical requirement. 
Yes and now you see why I claim Cram's surgeon must go in repeatedly to patch the software until his patient passes the Turing test: because the patient has no experience, the surgeon must keep working to meet your logical requirements. The surgeon finally gets it right with Service Pack 9076. Too bad his patient can't know it. -gts From painlord2k at libero.it Thu Jan 7 14:19:25 2010 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 07 Jan 2010 15:19:25 +0100 Subject: [ExI] World of Statecraft In-Reply-To: <293329.31522.qm@web65607.mail.ac4.yahoo.com> References: <293329.31522.qm@web65607.mail.ac4.yahoo.com> Message-ID: <4B45ED6D.5080102@libero.it> Il 07/01/2010 12.10, The Avantguardian ha scritto: > I notice that a lot of debate on the list take the form of debates over sociopolitical idealogies and the relative merits of each utilizing a very limited pool of historical examples: capitalism versus socialism versus minarchism versus populism versus democracy versus fascism etc. These debates seem to become very acrimonious and people seem to invest a lot of emotion in their chosen ideology on what amounts to little more than faith in the status quo. > > Admittedly I haven't put a lot of thought into it so it is still a very rough idea, but it occured to me that modified MMORPGs would make a great "laboratory" of sorts to empirically compare all the possible ideologies with one another in a risk-free controlled setting. One would simply need to eliminate computer generated "antagonists" and simply have the world populated by actual players with characteristics and abilities similar to any of the dozens of existing MMORPGs but more "down to earth". The players could form whatever types of "states" that they wanted and compete against each other for some predetermined periods time with the servers keeping track of metrics of success and failure of the various "states" resulting from the aggregate behavior of the individual players. One could simulate wars and markets and whatever else. This way dozens of civilizations could rise and fall within the space of a few years of real time and the > reasons for each could be analyzed by political scientists and economists and the lessons could be applied to the real world. Admittedly this might not be as fun as scorching hordes of computer generated orcs with magical fireballs, but it could be funded by grant money sufficient to pay the participants some small amount of cash for their participation as "research subjects". By the way I found this: Virtual Economy Research Network http://virtual-economy.org/ Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.725 / Database dei virus: 270.14.129/2605 - Data di rilascio: 01/07/10 08:35:00 From stathisp at gmail.com Thu Jan 7 14:40:52 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 01:40:52 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/7 Aware : > As I've said already (three times in this thread) it seems that > everyone here (and Searle) would agree with the functionalist > position: that perfect copies must be identical, and thus > functionalism needs no defense. The functionalist position is that a different machine performing the same function would produce the same mind. Searle and everyone on this list does not agree with this, nor to be fair is it trivially obvious. 
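To make "performing the same function" concrete, here is a minimal sketch - a toy threshold-unit "neuron" in Python, purely illustrative and not a model of any real neuron - of what functional identity means operationally: two implementations with different internals count as the same function when no probe of inputs and outputs can tell them apart. Whether sameness of function entails sameness of mind is exactly the point in dispute; the sketch only pins down the antecedent.

    # Two deliberately different implementations of one input/output rule.
    # If every probe returns identical outputs, anything downstream that only
    # sees outputs (other "neurons", or a Turing-test judge) cannot tell
    # which implementation it is wired to. Toy example, not neuroscience.
    from itertools import product

    def b_neuron(inputs, threshold=3):
        # "biological" version: explicit accumulation
        total = 0
        for x in inputs:
            total += x
        return 1 if total >= threshold else 0

    def p_neuron(inputs, threshold=3):
        # "prosthetic" version: different internals, same mapping
        return int(sum(inputs) >= threshold)

    assert all(b_neuron(v) == p_neuron(v) for v in product([0, 1], repeat=5))
    print("indistinguishable on all 32 binary input patterns")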
> Stathis continues to argue on the basis of functional identity, since > he doesn't seem to see how there could be anything more to the > question. [I know Stathis had a copy of Hofstadter's _I AM A STRANGE > LOOP_, but I suspect he didn't finish it.] I got to chapter 11, as it happens, and I did mean to finish it but still haven't. I agree with Hofstdter's, and your, epiphenomenalism. I usually only contribute to this list when I *disagree* with what someone says and feel that I have a significant argument to present against it. I'm better at criticising and destroying than praising and creating, I suppose. The argument with Gordon does not involve proposing or defending any theory of consciousness, but simply looks at the consequences of the idea that it is possible for a machine to reproduce behaviour but not thereby necessarily reproduce the original consciousness. It's not immediately obvious that this is a silly idea, and a majority of people probably believe it. However, it can be shown to be internally inconsistent, and without invoking any assumptions other than that consciousness is a naturalistic phenomenon. -- Stathis Papaioannou From jonkc at bellsouth.net Thu Jan 7 15:24:59 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 7 Jan 2010 10:24:59 -0500 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: On Jan 6, 2010, at 1:20 PM, x at extropica.org wrote: Me: >> we learned from the history of Evolution that consciousness is easy but >> intelligence is hard. > > So why don't you agree with me that intelligence must have "existed" > (been recognizable, if there had been an observer) for quite a long > time Because we learned from the history of Evolution that consciousness is easy but intelligence is hard. > before evolutionary processes stumbled upon the additional, > supervisory, hack of self-awareness What you just said is logically absurd. If consciousness doesn't effect intelligence then there is no way Evolution could have "stumbled upon" the trick of generating consciousness because it would convey no more adaptive advantage than eyes or pigment does for creatures that live all their life in dark caves. In short if even one conscious being exists on Planet Earth and if Evolution is true then the Turing Test works; and if the Turing Test doesn't work then neither does Evolution. > novel hacks like self-awareness discovered at some point, exploited > for the additional fitness they confer Fine, if true and consciousness aids fitness then it can be deduced from behavior. Either you can have intelligence without consciousness or you can not. The propositions lead to mutually contradictory conclusions, they can't both be right and you can't claim both as your own. You've got to make a choice. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Jan 7 16:04:21 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 7 Jan 2010 11:04:21 -0500 Subject: [ExI] The symbol grounding problem in strong AI. 
In-Reply-To: <527984.77632.qm@web36507.mail.mud.yahoo.com> References: <527984.77632.qm@web36507.mail.mud.yahoo.com> Message-ID: <187B6A0A-A8FD-42EB-BC18-A2178641FC72@bellsouth.net> On Jan 7, 2010, Gordon Swobe wrote: > > If we knew exactly what physical conditions must exist in the brain for consciousness to exist, i.e., if we knew everything about the NCC, This NCC of yours is gibberish. You state very specifically that it is not the signals between neurons that produce consciousness, so how can some sort of magical awareness inside the neuron correlate with anything? You must have 100 billion independent conscious entities inside your head. > then we could perfectly simulate those physical conditions on a computer. And someday we will do this. Glad to hear it. > But that computer simulation will have only weak AI So even physical perfection is not enough for consciousness, something must still be missing. Let's see if we can deduce some of the properties of that something. Well first of all obviously it's non-physical, also it can't be detected by the Scientific Method, it can't be produced by Darwin's Theory of Evolution, and it starts with the letter "S". John K Clark > for the same reason that simulated ice cubes won't cool your computer's processor. > > I understand why you want to say that I must therefore think consciousness exists outside the material world, or that I think we cannot compute the brain. But that's not what I mean at all. I see consciousness as just a state that the brain can be in. We can simulate that brain-state on a computer just as we can simulate the solid state of water. > > -gts > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Jan 7 15:38:51 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 7 Jan 2010 10:38:51 -0500 Subject: [ExI] Psi (no need to read this post you already know what it says ) In-Reply-To: <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> Message-ID: On Jan 6, 2010, JOSHUA JOB wrote: > I reject all types of this stuff because there is no conceivable way in which they could operate. I reject Psi too but not for that reason, I reject it because there is not one particle of credible evidence that the fucking phenomenon exists. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Thu Jan 7 16:28:56 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 08:28:56 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Thu, Jan 7, 2010 at 6:40 AM, Stathis Papaioannou wrote: > 2010/1/7 Aware : > >> ... it seems that everyone here (and Searle) would agree with >> the functionalist position: that perfect copies must be identical, >> and thus functionalism needs no defense. > > The functionalist position is that a different machine performing the > same function would produce the same mind. Searle and everyone on this > list does not agree with this, nor to be fair is it trivially obvious. You say "different machine"; I would say "different substrate", but no matter. 
We in this discussion, including Gordon, plus Searle, are of a level of sophistication that none of us believes in a "soul in the machine". Most people in this forum (and other tech/geek forums) have gotten to that level of sophistication, where they can proudly enjoy looking down from their improved point of view and smugly denounce those who don't, while remaining blind to levels of meaning still higher and more encompassing. > >> Stathis continues to argue on the basis of functional identity, since >> he doesn't seem to see how there could be anything more to the >> question. [I know Stathis had a copy of Hofstadter's _I AM A STRANGE >> LOOP_, but I suspect he didn't finish it.] > > I got to chapter 11, as it happens, and I did mean to finish it but > still haven't. I didn't finish it either. I found it very disappointing (my expectations set by GEB) in its self-indulgence and its lack of any substantial new insight. However, it may be useful for some not already familiar with its ideas. > I agree with Hofstdter's, and your, epiphenomenalism. But it's not most people's idea of epiphenomenalism, where the "consciousness" they know automagically emerges from a system of sufficient complexity and configuration. Rather, its an epistemological understanding of the (recursive) relationship between the observer and the observed. > I usually only contribute to this list when I *disagree* with what > someone says and feel that I have a significant argument to present > against it. I'm better at criticising and destroying than praising and > creating, I suppose. It's always easier to criticize, but creating tends to be more rewarding. Praising tends to fall by the wayside among us INTJs. > The argument with Gordon does not involve > proposing or defending any theory of consciousness, Here I must disagree... > but simply looks > at the consequences of the idea that it is possible for a machine to > reproduce behaviour but not thereby necessarily reproduce the original > consciousness. Your insistence that it is this simple is prolonging the cycling of that "strange loop" you're in with Gordon. It's not always clear what Gordon's argument IS--often he seems to be parroting positions he finds on the Internet--but to the extent he is arguing for Searle, he is not arguing against functionalism. Given functionalism, and the "indisputable 1st person evidence" of the existence of consciousness/qualia/meaning/intensionality within the system ("where else could it be?"), he points out quite correctly that no matter how closely one looks, no matter how subtle one's formal description might be, there's syntax but no semantics in the system. So I suggest (again) to you and Gordon, and Searle. that you need to broaden your context. That there is no essential consciousness in the system, but in the recursive relation between the observer and the observed. Even (or especially) when the observer and observed are functions of he same brain, you get self-awareness entailing the reported experience of consciousness, which is just as good because it's all you ever really had. > It's not immediately obvious that this is a silly idea, > and a majority of people probably believe it. Your faith in functionalism is certainly a step up from the assumptions of the silly masses. But everyone in this discussion, and most denizens of the Extropy list, already get this. > However, it can be shown > to be internally inconsistent, and without invoking any assumptions > other than that consciousness is a naturalistic phenomenon. 
Yes, but that's not the crux of this disagreement. In fact, there is no crux of this disagreement since to resolve it is not to show what's wrong within, but to reframe it in terms of a larger context. Searle and Gordon aren't saying that machine consciousness isn't possible. If you pay attention you'll see that once in a while they'll come right out and say this, at which point you think they've expressed an inconsistency. They're saying that even though it's obvious that some machines (e.g. humans) do have consciousness, it's also clear that no formal system implements semantics. And they're correct. That's why this, and the perennial personal-identity debates tend to be so intractable: It's like the man looking for the car keys he dropped somewhere in the dark, but looking only around the lamppost, for the obvious reason that that's the only place he can see. Enlarge the context. - Jef From spike66 at att.net Thu Jan 7 16:33:37 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 08:33:37 -0800 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> Message-ID: <79471C131D7F4EE28EE05A725ED29AED@spike> ...On Behalf Of John Clark ... >I reject Psi too but not for that reason, I reject it because there is not one particle of credible evidence that the fucking phenomenon exists...John K Clark John I can assure you that the fucking phenomenon exists. But what has that to do with Psi? I don't see how the two are related in any way. spike From kanzure at gmail.com Thu Jan 7 17:27:08 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Thu, 7 Jan 2010 11:27:08 -0600 Subject: [ExI] Fwd: [neuro] Daily Mail on Markram In-Reply-To: <20100107160023.GC17686@leitl.org> References: <20100107160023.GC17686@leitl.org> Message-ID: <55ad6af71001070927ja5080aar149cf51a7e5ce4f8@mail.gmail.com> ---------- Forwarded message ---------- From: Eugen Leitl Date: Thu, Jan 7, 2010 at 10:00 AM Subject: [neuro] Daily Mail on Markram To: tt at postbiota.org, info at postbiota.org, neuro at postbiota.org (aargh, I guess) http://www.dailymail.co.uk/sciencetech/article-1240410/The-real-Frankenstein-experiment-One-mans-mission-create-living-mind-inside-machine.html?printingPage=true The real Frankenstein experiment: One man's mission to create a living mind inside a machine By Michael Hanlon Last updated at 8:30 AM on 04th January 2010 Professor Markram is planning to create the world's most expensive 'baby' His words staggered the erudite audience gathered at a technology conference in Oxford last summer. Professor Henry Markram, a doctor-turned-computer engineer, announced that his team would create the world's first artificial conscious and intelligent mind by 2018. And that is exactly what he is doing. On the shore of Lake Geneva, this brilliant, eccentric scientist is building an artificial mind. A Swiss - it could only be Swiss - precision- engineered mind, made of silicon, gold and copper. The end result will be a creature, if we can call it that, which its maker believes within a decade may be able to think, feel and even fall in love. Professor Markram's 'Blue Brain' project, must rank as one of the most extraordinary endeavours in scientific history. 
If this 47-year-old South-African Israeli is successful, then we are on the verge of realising an age-old fantasy, one first imagined when an adolescent Mary Shelley penned Frankenstein, her tale of an artificial monster brought to life - a story written, quite coincidentally, just a few miles from where this extraordinary experiment is now taking place.

Success will bring with it philosophical, moral and ethical conundrums of the highest order, and may force us to confront what it means to be human. But Professor Markram thinks his artificial mind will render vivisection obsolete, conquer insanity and even improve our intelligence and ability to learn.

What Markram's project amounts to is an audacious attempt to build a computerised copy of a brain - starting with a rat's brain, then progressing to a human brain - inside one of the world's most powerful computers. This, it is hoped, will bring into being a sentient mind that will be able to think, reason, express will, lay down memories and perhaps even experience love, anger, sadness, pain and joy.

'We will do it by 2018,' says the professor confidently. 'We need a lot of money, but I am getting it. There are few scientists in the world with the resources I have at my disposal.'

There is, inevitably, scepticism. But even Markram's critics mostly accept that he is on to something and, most importantly, that he has the money. Tens of millions of euros are flooding into his laboratory at the Brain Mind Institute at the Ecole Polytechnique in Lausanne - paymasters include the Swiss government, the EU and private backers, including the computer giant IBM. Artificial minds are, it seems, big business.

The human brain is the most complex object in the universe. But Markram insists that the latest supercomputers will soon have its measure.

[Photo caption: Professor Markram believes that if his 'Blue Brain' project is successful, it will render vivisection obsolete]

As I toured his glittering laboratories, it became clear that this is certainly no ordinary scientific endeavour. In fact, Markram's department looks like the interior of the Starship Enterprise, and is full of toys that would make James Bond's Q blush with envy.

But how on earth do you build a brain in a computer? And haven't scientists been trying to do that - build an electronic brain - for decades, without success?

To understand the sheer importance of what Blue Brain is, it is helpful to understand, first, what it is not. Dr Markram is not trying to build the kind of clanking robot servant beloved of countless sci-fi movies. Real robots may be able to walk and talk and are based around computers that are 'taught' to behave like humans, but they are, in the end, no more intelligent than dishwashers. Markram dismisses these toys as 'archaic'.

Instead, Markram is building what he hopes will be a real person, or at least the most important and complex part of a real person - its mind. And so instead of trying to copy what a brain does, by teaching a computer to play chess, climb stairs and so on, he has started at the bottom, with the biological brain itself.

Our brains are full of nerve cells called neurons, which communicate with one another using minuscule electrical impulses.
The project literally takes apart actual brains cell by cell, using what amounts to extremely intricate dissecting techniques, analyses the billions of connections between the cells, and then plots these connections into a computer. The upshot is, in effect, a blueprint or carbon copy of a brain, rendered in software rather than flesh and blood. The idea is that by building a model of a real brain, it might - just might - begin to behave like the real thing.

To demonstrate how he is achieving this, Markram shows me a machine that resembles an infernal torture engine; a wheel about 2ft across with a dozen ultra-fine glass 'spokes' aimed at the centre. It is here that tiny slivers of rat brain are dissected, using tools finer than a human hair. Their interconnections are then mapped and turned into computer code.

[Photo caption: Professor Markram is adamant the experiment will not result in a stereotypical Frankenstein, like the one seen here in the 1970 film The Horror of Frankenstein]

A bucket full of slop lies next to the gleaming high-techery. That's where the bits of old rat brain go - a gruesome reminder that amid this is a project based upon flesh and blood.

So far, Markram's supercomputer - an IBM Blue Gene - is able, using the information gleaned from the slivers of real brain tissue, to simulate the workings of about 10,000 neurones, amounting to a single rat's 'neocortical column' - the part of a brain believed to be the centre of conscious thought.

That, says Markram, is the hard part. To go further, he is going to need a bigger computer. Using just 30 watts of electricity - enough to power a dim light bulb - our brains can outperform by a factor of a million or more even the mighty Blue Gene computer. But replicating a whole real brain is 'entirely impossible today', Markram says. Even the next stage - a complete rat brain - needs a £200million, vastly more efficient supercomputer.

Then what? 'We need a billion-dollar machine, custom-built. That could do a human brain.' But computing power is increasing exponentially and it is only a matter of time before suitable hardware is available. 'We will get there,' says Markram confidently. In fact, he believes that he will have a computer sufficiently powerful to deal with all the data and simulate a human brain before the end of this decade. The result? Perhaps a mind, a conscious, sentient being, able to learn and make autonomous decisions. It is a startling possibility.

When faced with such extraordinary claims, one must first ask the question: 'Is he mad?' I have met several scientists who maintain they can change the world: men (they are always men) who say they can build a time machine or a starship, cure cancer or old age. Men who believe telepathy is real, or that Earth has been visited by aliens or, indeed, who claim they are on the verge of creating artificial minds. Most of these men are deluded.

Markram is not mad, but he is certainly unsettling. He comes across like a combination of Victorian gentleman scientist and New Age guru. 'You have to understand physics, the structure of the universe and philosophy,' he says, theatrically.
He talks about humans 'not reaching their potential', and of his conviction that more of us have the capacity for genius than we think. He believes his artificial mind could show us how to exploit the untapped potential in our own minds. If we create a being more intelligent than us, maybe it could teach us how to catch up.

The best evidence that Markram is not crazy is that he gets his hands dirty. He knows his way around his machines and knows one end of a brain cell from another. The principles underlying his work are firmly rooted in the scientific mainstream.

[Photo caption: Professor Markram is hoping the artificial brain he will create could be used for medical research, but concedes this could cause ethical problems]

He believes that the deepest and most fundamental properties of being human - thoughts, emotions, the mysterious feeling of self-awareness - arise from trillions of electrochemical interactions that take place in the lump of grey jelly in our heads. He believes there is no mysterious 'soul' that gives rise to the feeling of self. On the contrary, he insists that this results from physical processes inside our skulls.

Of course, consciousness is one of the deepest scientific mysteries. How do millions of tiny electrical impulses in our heads give rise to the feeling of self, of pain, of love? No one knows. But if Markram is right, this doesn't matter. He believes that consciousness is probably something that simply 'emerges' given a sufficient degree of organised complexity.

Imagine it this way: think of the marvellous patterns that emerge when a flock of starlings swoops in unison at dusk. Thousands of birds are interacting to create a shape that resembles a single unified entity with a life of its own. Markram believes that this is how consciousness might emerge - from billions of separate brain cells combining to create a single sentient mind.

But what of the problems such an invention could generate? What if the machine makes demands? What if it begs you not to turn it off, or leave it alone at night? 'Maybe you will have to treat it like a child. Sometimes I will have to say to my child: "I have to go, sorry," ' he explains.

Indeed, the artificial brain would throw up a host of moral issues. Could you really use an artificial mind, which behaves like a real mind, to perform experiments on a human mind?

Dr David Lester, one of the project's lead scientists, says that they are effectively in a race with Markram, a race they will have to win with cunning rather than cash. 'We've got £4million,' Lester says. 'Blue Brain has serious funding from the Swiss government and IBM. Henry Markram is to be taken seriously.'

'The process of building this is going to change society. We will have ethical problems that are unimaginable to us.'

Manchester is hoping it is possible to simplify key elements of the brain and thus dramatically reduce the computation power needed to replicate them. Others doubt Markram can ever succeed. Imperial College professor Igor Aleksander claims that while Markram can build a copy of a human brain, it will be 'like an empty bucket', incapable of consciousness. And, as Dr Lester points out, 'a newly minted real human brain can't do very much except lie on the floor and gurgle'. Indeed, Professor Markram may end up creating the world's most expensive baby.
But if Markram turns his machine on in 2018, and it utters the famous declaration that underpins Western philosophy, 'I think, therefore I am', he will have confounded his critics. And his ambition is by no means impossible. In the past year, models of a rat brain produced totally unexpected 'brainwave patterns' in the computer software. Is it possible that, for a few seconds maybe, a fleeting rat-like consciousness emerged? 'Perhaps,' Markram says. It is not much, but if a rat, then why not a man? During my meeting I tried to avoid bringing up the name of the most famous (fictional) creator of artificial life, on the grounds of taste. But in the end, I had to mention him. 'Yes, well, Dr Frankenstein. People have made that point,' Markram says with a thin smile. Frankenstein's experiment, of course, went rather horribly wrong. And that was one man, with his monster made from bits of old corpse. A glittering machine brain, perhaps many times more intelligent than our own and created by one of the best-equipped laboratories in the world, carries, perhaps, even more potential for evil, as well as good. _______________________________________________ neuro mailing list neuro at postbiota.org http://postbiota.org/mailman/listinfo/neuro -- - Bryan http://heybryan.org/ 1 512 203 0507 From thespike at satx.rr.com Thu Jan 7 17:34:29 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 11:34:29 -0600 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <79471C131D7F4EE28EE05A725ED29AED@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> Message-ID: <4B461B25.6070405@satx.rr.com> On 1/7/2010 10:33 AM, spike wrote: > John I can assure you that the fucking phenomenon exists. But what has that > to do with Psi? I don't see how the two are related in any way. He's getting confused with the heavy Sigh phenomenon. Damien Broderick From aware at awareresearch.com Thu Jan 7 17:54:34 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 09:54:34 -0800 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: 2010/1/7 John Clark : > On Jan 6, 2010, at 1:20 PM, x at extropica.org wrote: > Me: > >>> we learned from the history of Evolution that consciousness is easy but >>> intelligence is hard. > >> So why don't you agree with me that intelligence must have "existed" >> (been recognizable, if there had been an observer) for quite a long >> time > > Because we learned from the history of Evolution that consciousness is easy > but > intelligence is hard. Well, that response clearly adds nothing to the discussion, and you stripped out my supporting text. >> before evolutionary processes stumbled upon the additional, >> supervisory, hack of self-awareness > > What you just said is logically absurd. Really? Given your experience of my thinking over several years together on this list, do you think it's more likely that I'm simply a generator of logical absurdities, or is it more likely that you don't understand the basis of my statement? [I note that you're not asking for any clarification.] > If consciousness doesn't effect intelligence Do you mean literally "If consciousness doesn't produce intelligence" or do you mean "If consciousness doesn't affect intelligence"? 
If you mean literally the former, then it appears that you must harbor a mystical notion of "consciousness" that contributes to the somewhat "intelligent" behavior of the amoeba, despite its apparent lack of the neuronal apparatus necessary to support a sense of self. I know John Clark doesn't tolerate mysticism, and I know Damien has already flagged your use of the word "effect", so I can only guess that you mean "consciousness" in a way that doesn't require much, if any, hardware support. [I'll note here that in addition to stripping out substantial portions of my supporting text, you've also eliminated my careful definitions of what I meant when I used the words "consciousness" and "intelligence."] > then there is no way Evolution could have "stumbled upon" the > trick of generating consciousness It may be relevant that the way evolution (It's not clear why you would capitalize that word) works is always in terms of blind, stumbling, random variation. Of course genetic variation is strongly constrained, and phenotypic variation is strongly facilitated by preexisting structures. > because it would convey no more adaptive > advantage than eyes or pigment does for creatures that live all their life > in dark caves. There appears to be such a strong disconnect here that I suspect we're not even talking about the same things. It seems obvious that, given a particular degree of adaptation of an organism to its environment, then to the extent the organism's fitness would be enhanced by the ability to model possible variations on itself acting within its environment, especially if this facilitates cooperative behaviors with others similar to itself, such "adaptive advantage" would tend to be selected. What do YOU mean? > In short if even one conscious being exists on Planet Earth > and if Evolution is true then the Turing Test works; Huh? If there were only one conscious being, then wouldn't that have to be the one judging the Turing Test? And if there is no other conscious being, how could any (non-conscious by definition) subject pass the test (such that the TT would be shown to "work")? > and if the Turing Test > doesn't work then neither does Evolution. Huh?? >> novel hacks like self-awareness discovered at some point, exploited >> for the additional fitness they confer > > Fine, if true and consciousness aids fitness then it can be deduced from > behavior. Well, not "deduced" but certainly inferred... > Either you can have intelligence without consciousness or you can > not. The propositions I'm going to assume, since you emphasize "Evolution", that your propositions should be stated in terms of "evolved organisms", and not in terms of the more general "systems that display behavior assessed as intelligent." So, (A) Evolved organisms can be correctly assessed as displaying intelligence but without consciousness. (B) Evolved organisms can be correctly assessed as displaying intelligence along with consciousness. > lead to mutually contradictory conclusions, they can't > both be right and you can't claim both as your own. You've got to make a > choice. Why? It seems to me that we observe the existence of both classes of evolved organisms. As I've said before, a wide range of organisms can display behaviors expressing a significant degree of effective prediction and control in regard to their environment of adaptation. And the addition of a layer of supervisory self-awareness can be a beneficial add-on for more advanced environments of interaction.
I'm guessing that our disagreement here comes down to different usage and meaning of the terms "intelligence" and "consciousness" and it might be significant that you stripped out all evidence and results of my efforts to effectively define them. You seem not to play fair, so it's not much fun. - Jef From ismirth at gmail.com Thu Jan 7 17:55:48 2010 From: ismirth at gmail.com (Isabelle Hakala) Date: Thu, 7 Jan 2010 12:55:48 -0500 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <4B461B25.6070405@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> Message-ID: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> First off, what was the impetus for this topic being started? Secondly, to the person who started it, how do you define psi (so that we may have a common language for the discussion)? What parts of the definition are you refuting exist? It seems that this post must have spun off from some other discussion but the relevant parts were not copied over. I am a scientist, and I have had many things happen that I would consider to qualify as 'psi'. I have simply *known* when someone was in a car accident. I have just *known* when someone died. I have also just *known* the exact moment someone read an email from me. These things have not happened consistently as an adult but as a child I always knew when the phone was about to ring, and who was on it. Maybe none of those were strong enough to be considered psi, but if there is even an smidgen of something that could be psi, then there are far more things that could be possible. Please try to keep these discussions civil, as we want to encourage people to share their opinions without feeling attacked by others, otherwise we will not have a diversity of opinions, which is needed to stretch our capacity for reasoning. -Isabelle ~~~~~~~~~~~~~~~~~~~~~~~ Isabelle Hakala "Any person who says 'it can't be done' shouldn't be interrupting the people getting it done." "Do every single thing in life with love in your heart." On Thu, Jan 7, 2010 at 12:34 PM, Damien Broderick wrote: > On 1/7/2010 10:33 AM, spike wrote: > > John I can assure you that the fucking phenomenon exists. But what has >> that >> to do with Psi? I don't see how the two are related in any way. >> > > He's getting confused with the heavy Sigh phenomenon. > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cetico.iconoclasta at gmail.com Thu Jan 7 18:25:36 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Thu, 7 Jan 2010 16:25:36 -0200 Subject: [ExI] Psi (no need to read this post you already know whatitsays ) References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> Isabelle Hakala >>I am a scientist, and I have had many things happen that I would consider to qualify as 'psi'. I have simply *known* when someone >was in a car accident. I have just *known* when someone died. 
I have also >just *known* the exact moment someone read an email from me. These >things have not happened consistently as an adult but as a child I always >knew when the phone was about to ring, and who was on it. Maybe none of >those were strong enough to be considered psi, but if there is even an >smidgen of something that could be psi, then there are far more things that >could be possible. Well, can you tell which bone I broke last year and what caused it? From nanite1018 at gmail.com Thu Jan 7 19:52:22 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Thu, 7 Jan 2010 14:52:22 -0500 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <2631098A-CF5B-4898-8208-7380C4C8D867@GMAIL.COM> > I am a scientist, and I have had many things happen that I would > consider to qualify as 'psi'. I have simply *known* when someone was > in a car accident. I have just *known* when someone died. I have > also just *known* the exact moment someone read an email from me. > These things have not happened consistently as an adult but as a > child I always knew when the phone was about to ring, and who was on > it. Maybe none of those were strong enough to be considered psi, but > if there is even an smidgen of something that could be psi, then > there are far more things that could be possible. > -Isabelle You did not "always" know these things. As a scientist you should be more careful of bias (as a child, you almost certainly weren't careful to protect against such things). You likely sometimes had a random "feeling" and when it happened you remembered, when something didn't happen, you forgot. Lucky guesses once in a great while about stuff like when people read emails or when someone was in a car wreck (you may have actually retroactively attributed the wreck as the cause of your feeling, when in fact it was something different). There is no evidence that this stuff exists, at least not anything statistically significant. It is surprising to me that many otherwise perfectly rational people buy into the nonsense one finds in the new "metaphysical studies" section of Borders. Joshua Job nanite1018 at gmail.com From ismirth at gmail.com Thu Jan 7 20:01:07 2010 From: ismirth at gmail.com (Isabelle Hakala) Date: Thu, 7 Jan 2010 15:01:07 -0500 Subject: [ExI] Psi (no need to read this post you already know whatitsays ) In-Reply-To: <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> Message-ID: <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> No, not really, it is more passive than that. I don't really have any control over it happening. I had an image of your leg, below your knee, come to mind, and I also had a tree come to mind, but I wouldn't know what the heck that means. With the car accident I suddenly felt what felt like would be "Michael's fear of being in a car accident" (it was Michael in the car accident). 
I immediately called his cell, and he didn't answer so I drove to his house. He showed up an hour later and he said that about 30 seconds after the accident he heard his cell ringing, but it was trapped under the seat where he couldn't get to it to answer or call me back. Also, with the phone calls, I would regularly yell to my mom to answer the phone, and yell who it was calling, and *then* the phone would ring. This really freaked my mom out and she finally asked me to stop doing it. This was in the last 70's and early 80's. The calls were random and one of the calls was from someone my mother hadn't spoken to in many years, and yet I said who it was before the phone rang, and was correct. When I said someone was calling, and who it was, the phone always rang right away, and the person was always correct. After my mother asked me to stop I couldn't do it anymore. And as a scientist, I think that people need to realize that just because we don't understand something or have proof of it, does NOT mean it doesn't exist. Before we knew what molecules were, or how to see them or detect them, there still were molecules, and anyone would have thought you crazy if you tried to convince them that molecules existed. ~~~~~~~~~~~~~~~~~~~~~~~ Isabelle Hakala "Any person who says 'it can't be done' shouldn't be interrupting the people getting it done." "Do every single thing in life with love in your heart." On Thu, Jan 7, 2010 at 1:25 PM, Henrique Moraes Machado (CI) < cetico.iconoclasta at gmail.com> wrote: > Isabelle Hakala >>I am a scientist, and I have had many things happen that > I would consider to qualify as 'psi'. I have simply *known* when someone > > was in a car accident. I have just *known* when someone died. I have also >> just *known* the exact moment someone read an email from me. These >> things have not happened consistently as an adult but as a child I always >> knew when the phone was about to ring, and who was on it. Maybe none of >> those were strong enough to be considered psi, but if there is even an >> smidgen of something that could be psi, then there are far more things that >> could be possible. >> > > > > Well, can you tell which bone I broke last year and what caused it? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Jan 7 20:11:45 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 7 Jan 2010 21:11:45 +0100 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <580930c21001071211i44e07cb8kca3621d5a4769035@mail.gmail.com> 2010/1/7 Isabelle Hakala : > I am a scientist, and I have had many things happen that I would consider to > qualify as 'psi'. I have simply *known* when someone was in a car accident. > I have just *known* when someone died. I have also just *known* the exact > moment someone read an email from me. These things have not happened > consistently as an adult but as a child I always knew when the phone was > about to ring, and who was on it. 
Maybe none of those were strong enough to > be considered psi, but if there is even an smidgen of something that could > be psi, then there are far more things that could be possible. Mmhhh. For me, "psi" is simply the fact that we appear to guess the right card, in experiments that can be repeated at will, infinitesimally more often than we statistically should. -- Stefano Vaj From ismirth at gmail.com Thu Jan 7 20:23:42 2010 From: ismirth at gmail.com (Isabelle Hakala) Date: Thu, 7 Jan 2010 15:23:42 -0500 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <580930c21001071211i44e07cb8kca3621d5a4769035@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <580930c21001071211i44e07cb8kca3621d5a4769035@mail.gmail.com> Message-ID: <398dca511001071223q531fc56bj7d9f8f87c08c1786@mail.gmail.com> Also, lots of people as children would be out playing somewhere and then just *know* that they were in trouble, and not even have a clue as to *why*, but just run straight home, and find their parent waiting on the doorstep for them. This happened to me a couple of times, and to several of my friends through my teenage years. I don't think it qualifies for anything that someone can test, but there are circumstances that convince people that we have more abilities than just the obvious ones. ~~~~~~~~~~~~~~~~~~~~~~~ Isabelle Hakala "Any person who says 'it can't be done' shouldn't be interrupting the people getting it done." "Do every single thing in life with love in your heart." On Thu, Jan 7, 2010 at 3:11 PM, Stefano Vaj wrote: > 2010/1/7 Isabelle Hakala : > > I am a scientist, and I have had many things happen that I would consider > to > > qualify as 'psi'. I have simply *known* when someone was in a car > accident. > > I have just *known* when someone died. I have also just *known* the exact > > moment someone read an email from me. These things have not happened > > consistently as an adult but as a child I always knew when the phone was > > about to ring, and who was on it. Maybe none of those were strong enough > to > > be considered psi, but if there is even an smidgen of something that > could > > be psi, then there are far more things that could be possible. > > Mmhhh. For me, "psi" is simply the fact that we appear to guess the > right card, in experiments that can be repeated at will, > infinitesimally more often than we statistically should. > > -- > Stefano Vaj > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Jan 7 20:25:34 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 7 Jan 2010 21:25:34 +0100 Subject: [ExI] effect/affect again In-Reply-To: <014A7D44-C323-482C-BF40-3537A46F37BB@bellsouth.net> References: <20100104183358.OUQ02.149499.root@hrndva-web14-z02> <4B4236C0.4040307@satx.rr.com> <9AB494A79BAC4691AC83EC32CCDF5D0D@spike> <014A7D44-C323-482C-BF40-3537A46F37BB@bellsouth.net> Message-ID: <580930c21001071225g726be589md2d6e06f847431fc@mail.gmail.com> Strange how if you are Neolatin mother tongue all that does not sound far-fetched in the least... 
;-) 2010/1/4 John Clark : > On Jan 4, 2010, spike wrote: > Fortunately, Damien is an affable character, even if at times ineffable. > > And redoubtable too when he wasn't being inscrutable. -- Stefano Vaj From aware at awareresearch.com Thu Jan 7 21:11:30 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 13:11:30 -0800 Subject: [ExI] Paper: Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation Message-ID: Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation A technical paper published 2010-01-06 showing progress in areas of particular interest to me. Related to the problem of quantification of coherence over a context of mutual interaction information as well as to Ulanowicz's notion of using a local reduction in uncertainty based on mutual information among three or more dimensions as an indicator of "ascendency." Heady stuff. This paper might be seen as somewhat related to the issue of Searle's Chinese Room since it does address an approach to quantifying the **observer-dependent** meaningfulness of mutual interaction information within a system, but it has no reason to say anything about the epistemological context of that debate. [Lizbeth said I should try using different words so people wouldn't tend to tune me out. Fortunately these words from Loet Leydesdorff arrived just in time to do the job.] - Jef From thespike at satx.rr.com Thu Jan 7 21:15:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 15:15:27 -0600 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <4B464EEF.1000200@satx.rr.com> On 1/7/2010 11:55 AM, Isabelle Hakala wrote: > Please try to keep these discussions civil, as we want to encourage > people to share their opinions without feeling attacked by others, > otherwise we will not have a diversity of opinions, which is needed to > stretch our capacity for reasoning. That's unlikely, because the topic (for reasons probably having to do with immunization reactions to religion) is toxic to many otherwise open-minded people here, so they react with vehemence and without any appeal to evidence. Some refuse even to consider evidence when it's provided (John Clark, say, who proudly declares that he won't look at anything pretending to be evidence for psi, since he knows a priori that it's BULLSHIT!!!). Anyone interested in my pro-psi opinion will have to read my 350pp book on the topic, OUTSIDE THE GATES OF SCIENCE; I'm tired of repeating in bite-sized chunks what I've already spent a lot of effort writing carefully. (For an opinion of the book and the topic by one very bright and open-minded sometime ExIchat poster, consult Ben Goertzel's review on the amazon site.)
Damien Broderick From spike66 at att.net Thu Jan 7 21:22:00 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 13:22:00 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> Message-ID: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> ...On Behalf Of Isabelle Hakala ... Subject: Re: [ExI] Psi (no need to read this post you already knowwhatitsays ) > ...he said that about 30 seconds after the accident he heard his cell ringing, but it was trapped under the seat where he couldn't get to it to answer or call me back... Isabelle, I know how to explain this. Michael had a premonition that you were going to call him, to tell him you had a premonition he had been in an accident. He was fumbling around looking for his cell phone instead of watching where he was going and BOOM. That accident caused your premonition (of which he had already had a premonition) and called him, but by then it was too late. I am having a strange feeling or vision about you. It involves a computer and a chair, but no broken bones or trees. I don't know what it means. > ...Also, with the phone calls, I would regularly yell to my mom to answer the phone, and yell who it was calling, and *then* the phone would ring. This really freaked my mom out and she finally asked me to stop doing it. This was in the last 70's and early 80's... Perhaps your young ears were able to detect the ultra high frequency sound that the 70s era telephone electromechanical devices would make a couple of seconds before the phone rang. Recall those things had a capacitor in them, which had to charge, and the discharge cycle would cause the ring to be the usual intermittent signal. The only reason I know about this is that back in the old days when long distance phone calls cost a lot of money, people would regularly signal each other by prearranging to call at a certain time; the number of rings would be translated to a message. An example is the signal agreed upon in my own misspent youth regarding the approaching redcoats: one if by land, two if by sea. Of course if no one answered, the call was free. The phone company figured it out and responded by de-synchronizing what the caller heard and what the called phone did so that they didn't necessarily agree anymore. Isabelle, young people such as yourself perhaps do not recall the days when ripping off the phone company was great nerd entertainment. Apple computer was started by a bunch of geeks who chose ripping off the phone company over the usual high school preoccupation, attempting to effect(v) recreational copulation, commonly known as the fucking phenomenon. > ...The calls were random and one of the calls was from someone my mother hadn't spoken to in many years, and yet I said who it was before the phone rang, and was correct. When I said someone was calling, and who it was, the phone always rang right away, and the person was always correct. After my mother asked me to stop I couldn't do it anymore... Just a guess, but I will offer an explanation for why I could never do the feat you describe. 
I had a premonition that my mother ask me to cut the crap with the whole anticipating phone calls phenomenon because it was freaking her beak, and so I could not do it anymore before I actually ever could do it to start with. It was a preemptive attack on my premonitions. As a closing comment on this topic, a weird thing happened to me the other day. I had a strange feeling that nothing would happen. Suddenly and without warning, nothing happened. A minute passed. I looked at my watch. It was a minute past. It is so weird, I can't explain it. spike (Isabelle, you are new here. A warm extropian welcome to you my friend. I am well known in these parts for posting this kind of silliness.) From aware at awareresearch.com Thu Jan 7 21:51:07 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 13:51:07 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Thu, Jan 7, 2010 at 8:28 AM, Aware wrote: > On Thu, Jan 7, 2010 at 6:40 AM, Stathis Papaioannou wrote: >> I agree with Hofstdter's, and your, epiphenomenalism. > > But it's not most people's idea of epiphenomenalism, where the > "consciousness" they know automagically emerges from a system of > sufficient complexity and configuration. ?Rather, its an > epistemological understanding of the (recursive) relationship between > the observer and the observed. Relevant to this discussion is an article in New Scientist, just today: Note all the angry righteous commenters, defending Science against this affront to reductionist materialism. Note too, if you can, that they didn't understand the content that they attack. Only after more than twenty angry comments, someone posted the following: "Sorry, I'm not the best at explaining these things, but read up on phenomenology or qualitative research's epistemology and you should see the thrust of his argument. And to the person who argued that we can't understand a computer according to this argument, that was a bit of a straw man fallacy. The computer, both as the machine and the appearances on the screen are objects or phenomena to be observed, not an observer. Unless the computer is trying to address it's own ontology, it is an observed object being observed by an outside subject making it still under the usual rules of quantitative epistemology. It is when you try to observe the observation of the object that things would get complicated. I can't say whether or not he's correct, but I think it is a useful critique on the epistemology of neuroscience." - Jef - Jef From aware at awareresearch.com Thu Jan 7 22:04:57 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 14:04:57 -0800 Subject: [ExI] Telephone hacking Message-ID: On Thu, Jan 7, 2010 at 1:22 PM, spike wrote: > Recall those things had a capacitor in them, > which had to charge, and the discharge cycle would cause the ring to be the > usual intermittent signal. ?The only reason I know about this is that back > in the old days when long distance phone calls cost a lot of money, people > would regularly signal each other by prearranging to call at a certain time; > the number of rings would be translated to a message. ?An example is the > signal agreed upon in my own misspent youth regarding the approaching > redcoats: one if by land, two if by sea. 
Reminds me of one of the things I did in MY misspent youth: It turns out that while the phone was ringing, even though it hadn't been answered (picked up) there was already an audio connection available through the circuit. I had a friend about 30 miles away (long distance charges) but we were able to communicate by voice--better than counting number of rings--while the phone was ringing, by using audio amplifiers (essentially an intercom) with capacitive coupling to block the 45VDC and diode clipping to limit the 90VAC ring signal. Oh yeah, I was a wild electronics experimenter in my youth... - Jef From thespike at satx.rr.com Thu Jan 7 22:06:47 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 16:06:47 -0600 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: <4B465AF7.1030709@satx.rr.com> On 1/7/2010 3:51 PM, Aware quoth: > It is when you try to observe the observation of the > object that things would get complicated. I can't say whether or not > he's correct, but I think it is a useful critique on the epistemology > of neuroscience." > > - Jef > > - Jef Is that meta-Jef observing the observation of Jef? From aware at awareresearch.com Thu Jan 7 22:12:46 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 14:12:46 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: <4B465AF7.1030709@satx.rr.com> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <4B465AF7.1030709@satx.rr.com> Message-ID: On Thu, Jan 7, 2010 at 2:06 PM, Damien Broderick wrote: > On 1/7/2010 3:51 PM, Aware quoth: > >> ?It is when you try to observe the observation of the >> object that things would get complicated. I can't say whether or not >> he's correct, but I think it is a useful critique on the epistemology >> of neuroscience." >> >> - Jef >> >> - Jef Hehe. Yes, a bad habit but sometimes good for catching Jef just before he does something impulsive. - Jef From stathisp at gmail.com Thu Jan 7 22:13:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 09:13:49 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <446942.27612.qm@web36504.mail.mud.yahoo.com> References: <446942.27612.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/8 Gordon Swobe : > --- On Thu, 1/7/10, Stathis Papaioannou wrote: > >> There *must* be something uncomputable about the behaviour of neurons... > > No. (Of course I don't claim that there must be something uncomputable about neurons, it's only if, as you seem to be saying, p-neurons are impossible that there must be something uncomputable about neurons.) >>... if it can't be copied well enough to make p-neurons, >> artificial neurons which behave exactly like b-neurons but lack the >> essential ingredient for consciousness. This isn't a contingent fact, >> it's a logical requirement. > > Yes and now you see why I claim Cram's surgeon must go in repeatedly to patch the software until his patient passes the Turing test: because the patient has no experience, the surgeon must keep working to meet your logical requirements. The surgeon finally gets it right with Service Pack 9076. Too bad his patient can't know it. The surgeon will be rightly annoyed if the tweaking and patching has not been done at the factory so that the p-neurons just work. 
-- Stathis Papaioannou From stathisp at gmail.com Thu Jan 7 22:18:06 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 09:18:06 +1100 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <187B6A0A-A8FD-42EB-BC18-A2178641FC72@bellsouth.net> References: <527984.77632.qm@web36507.mail.mud.yahoo.com> <187B6A0A-A8FD-42EB-BC18-A2178641FC72@bellsouth.net> Message-ID: 2010/1/8 John Clark : > On Jan 7, 2010, Gordon Swobe wrote: > > If we knew exactly what physical conditions must exist in the brain for > consciousness to exist, i.e., if we knew everything about the NCC, > > This NCC of yours is gibberish. You state very specifically that it is not > the signals between neurons that produce consciousness, so how can some sort > of magical awareness inside the neuron correlate with anything? You must > have 100 billion independent conscious entities inside your head. The NCC is either gibberish or something trivially obvious, like oxygen, since without it neurons wouldn't work and you would lose consciousness. -- Stathis Papaioannou From steinberg.will at gmail.com Thu Jan 7 22:42:16 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 7 Jan 2010 17:42:16 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <4e3a29501001071442r3a67071duaedf6deaf9d5fbd3@mail.gmail.com> The "psi" I speak of refers to (and my delineating of this certainly does not mean I am anything but a curious skeptic) any unexplained cognitive phenonema which are predictive or observational like those that I have listed, some, like telepathy, being favored over sillier ones. I know it is trite to bring up the whole "nobody believed regular physics, and regular physics was right; nobody believed quantum physics, and quantum physics was right" thing, but honestly, a mind too closed to at least prove why they feel these things are impossible may not be expressing a truly extropian mindset. It would seem beneficial to at least humor the idea, given that its existence would mean a revelation in the intellectual community. I am sure that all of you in the intellectual elect would be able to devise theories or experiments to prove or disprove. It's hard not to think "How could so many intelligent people think this has value without it actually having value?", but then I think maybe it's just some residual, romantic, magic-world security blanket stuck in my brain, though I am a deterministic nihilist like most of you so I would hope that is not the case. I'm not sure about this, but since when has that ever been an excuse for abandoning intellectual pursuit? You have plenty of time to speculate while waiting for the thread on Dyson shells to update, though I would imagine considerably less time than waiting for the *existence* of Dyson shells to update. So--use some statistics and show why it's theoretically impossible, for science's sake. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scerir at libero.it Thu Jan 7 22:47:15 2010 From: scerir at libero.it (scerir) Date: Thu, 7 Jan 2010 23:47:15 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <19052174.225601262904435226.JavaMail.defaultUser@defaultHost> > Mmhhh. For me, "psi" is simply the fact that we appear to guess the > right card, in experiments that can be repeated at will, > infinitesimally more often than we statistically should. > Stefano Vaj hey, there are amazing experiments here http://www.parapsych.org/online_psi_experiments.html http://www.fourmilab.ch/rpkp/experiments/ From stathisp at gmail.com Thu Jan 7 22:54:38 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jan 2010 09:54:38 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/8 Aware : > Your insistence that it is this simple is prolonging the cycling of > that "strange loop" you're in with Gordon. ?It's not always clear what > Gordon's argument IS--often he seems to be parroting positions he > finds on the Internet--but to the extent he is arguing for Searle, he > is not arguing against functionalism. Searle is explicitly opposed to functionalism. He allows that *some* machine that reproduces the function of the brain would reproduce consciousness but not that *any* machine would do so: computers, beer cans and toilet paper and the CR wouldn't cut it, for example. > Given functionalism, and the "indisputable 1st person evidence" of the > existence of consciousness/qualia/meaning/intensionality within the > system ("where else could it be?"), he points out quite correctly that > no matter how closely one looks, no matter how subtle one's formal > description might be, there's syntax but no semantics in the system. > > So I suggest (again) to you and Gordon, and Searle. that you need to > broaden your context. ?That there is no essential consciousness in the > system, but in the recursive relation between the observer and the > observed. Even (or especially) when the observer and observed are > functions of he same brain, you get self-awareness entailing the > reported experience of consciousness, which is just as good because > it's all you ever really had. Isn't the relationship between the observer and observed a function of the observer-observed system? >> It's not immediately obvious that this is a silly idea, >> and a majority of people probably believe it. > > Your faith in functionalism is certainly a step up from the > assumptions of the silly masses. ?But everyone in this discussion, and > most denizens of the Extropy list, already get this. > > >> ?However, it can be shown >> to be internally inconsistent, and without invoking any assumptions >> other than that consciousness is a naturalistic phenomenon. > > Yes, but that's not the crux of this disagreement. ?In fact, there is > no crux of this disagreement since to resolve it is not to show what's > wrong within, but to reframe it in terms of a larger context. Maybe, but it's also satisfying to show in a debate without introducing extraneous ideas that the premises your opponent presents you with lead to inconsistency. > Searle and Gordon aren't saying that machine consciousness isn't > possible. ?If you pay attention you'll see that once in a while > they'll come right out and say this, at which point you think they've > expressed an inconsistency. 
?They're saying that even though it's > obvious that some machines (e.g. humans) do have consciousness, it's > also clear that no formal system implements semantics. ?And they're > correct. What about this idea: there is no such thing as semantics, really. It's all just syntax. -- Stathis Papaioannou From aware at awareresearch.com Fri Jan 8 00:05:04 2010 From: aware at awareresearch.com (Aware) Date: Thu, 7 Jan 2010 16:05:04 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Thu, Jan 7, 2010 at 2:54 PM, Stathis Papaioannou wrote: > 2010/1/8 Aware : > > Searle is explicitly opposed to functionalism. I admit it's been about 30 years since read Searle's stuff and related commentary in detail, but I think I've kept a clear understanding of where he went wrong--and of which I have yet to see evidence of your understanding. [That sounds a little harsh, doesn't it? (INTJ)] Seems to me that Searle accepts functionalism, but never makes it explicit, possibly for the sake of promoting argument over his beloved point. Seems to me that nearly everyone reacts with some disdain to his apparent affront to functionalism, and then proceeds to argue entirely on that basis. But if you watch carefully, he accepts functionalism, IFF the candidate machine/substrate actually reproduces the function of the brain. But then he goes on to show that for any formal description of any machine, there's no place IN THE MACHINE where understanding actually occurs. He's right about that. But here he goes wrong: He claims that human brains obviously do have understanding, and suggests that he has therefore proved that there is something different about attempts to produce the same in machines. But there's no understanding in the human brain, either, nor any evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER. We don't have understanding in our brains, but we don't need it. Never did. We have only actions, which appear (with good reason) to be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE ACTOR ITSELF. Sure it's non-intuitive. It's Zen. In the true, non-bastardized sense of the word. And if you're gonna design an AI that displays consciousness, then it would be helpful to understand this so you don't spin your wheels trying to figure out how to implement it. >> So I suggest (again) to you and Gordon, and Searle. that you need to >> broaden your context. ?That there is no essential consciousness in the >> system, but in the recursive relation between the observer and the >> observed. Even (or especially) when the observer and observed are >> functions of he same brain, you get self-awareness entailing the >> reported experience of consciousness, which is just as good because >> it's all you ever really had. > > Isn't the relationship between the observer and observed a function of > the observer-observed system? No. The system that is being observed has no place in it where meaning/semantics/qualia/intentionality can be said to exist. If you look closely all you will find is components in a chain of cause and effect. Syntax but no semantics, as Gordon pointed out early on in this discussion. But an observer, at whatever level of recursion, will report meaning in its terms. It may help to consider this: If I ask you (or you ask yourself (Don't worry; it's recursive)) about the redness of an apple that you are seeing, that "experience" never occurs in real-time. 
It's always only a product of some processing that necessarily takes some time. Real-time experience never happens; it's a logical and practical impossibility, So in any case, the information corresponding to the redness of that apple, its luminance, its saturation, its flaws, its associations with the remembered red of a fire truck, and on and on, is in effect delivered or made available, after some delay, to another system. And that system will do whatever it is that it will do, determined by its nature within that context. In the case of delivery to the system (observer) that is going to find out about that red, then the observer system will then do something with that information (again completely determined by its nature, with that context.) The observer system might remark out loud about the redness of the apple, and remember doing so. It may say nothing, and only store the new perception (of perceiving) the redness. A moment later it may use that perception (from memory) again, of course linked with newly delivered information as well. If at any point the nature of the observer (within context, which might be me asking you what you experienced) focuses attention again on information about its internal state, the process repeats, keeping the observer process pretty well satisfied. From a third-person point of view, there was never any meaning anywhere in the system, including within the observer we just described. But if you ask the observer about the experience, of course it will truthfully report in terms of first-person experience. What more is there to say? > What about this idea: there is no such thing as semantics, really. > It's all just syntax. Yes, well, it all depends on your context, which is what I've been saying all along. - Jef From spike66 at att.net Fri Jan 8 01:16:10 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 17:16:10 -0800 Subject: [ExI] golden ratio discovered in quantum world In-Reply-To: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM><79471C131D7F4EE28EE05A725ED29AED@spike><4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com><02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm><398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <24B110CD0BA14840BA294A133E8051D9@spike> Since we are discussing weirdness, check this: http://www.physorg.com/news182095224.html Golden ratio discovered in a quantum world Researchers from the Helmholtz-Zentrum Berlin f?r Materialien und Energie (HZB, Germany), in cooperation with colleagues from Oxford and Bristol Universities, as well as the Rutherford Appleton Laboratory, UK, have for the first time observed a nanoscale symmetry hidden in solid state matter. They have measured the signatures of a symmetry showing the same attributes as the golden ratio famous from art and architecture. The research team is publishing these findings in Science on the 8 January. On the atomic scale particles do not behave as we know it in the macro-atomic world. New properties emerge which are the result of an effect known as the Heisenberg's Uncertainty Principle. In order to study these nanoscale quantum effects the researchers have focused on the magnetic material cobalt niobate. 
It consists of linked magnetic atoms, which form chains just like a very thin bar magnet, but only one atom wide and are a useful model for describing ferromagnetism on the nanoscale in solid state matter. When applying a magnetic field at right angles to an aligned spin the magnetic chain will transform into a new state called quantum critical, which can be thought of as a quantum version of a fractal pattern. Prof. Alan Tennant, the leader of the Berlin group, explains "The system reaches a quantum uncertain - or a Schr?dinger cat state. This is what we did in our experiments with cobalt niobate. We have tuned the system exactly in order to turn it quantum critical." By tuning the system and artificially introducing more quantum uncertainty the researchers observed that the chain of atoms acts like a nanoscale guitar string. Dr. Radu Coldea from Oxford University, who is the principal author of the paper and drove the international project from its inception a decade ago until the present, explains: "Here the tension comes from the interaction between spins causing them to magnetically resonate. For these interactions we found a series (scale) of resonant notes: The first two notes show a perfect relationship with each other. Their frequencies (pitch) are in the ratio of 1.618 , which is the golden ratio famous from art and architecture." Radu Coldea is convinced that this is no coincidence. "It reflects a beautiful property of the quantum system - a hidden symmetry. Actually quite a special one called E8 by mathematicians, and this is its first observation in a material", he explains. The observed resonant states in cobalt niobate are a dramatic laboratory illustration of the way in which mathematical theories developed for particle physics may find application in nanoscale science and ultimately in future technology. Prof. Tennant remarks on the perfect harmony found in quantum uncertainty instead of disorder. "Such discoveries are leading physicists to speculate that the quantum, atomic scale world may have its own underlying order. Similar surprises may await researchers in other materials in the quantum critical state." The researchers achieved these results by using a special probe - neutron scattering. It allows physicists to see the actual atomic scale vibrations of a system. Dr. Elisa Wheeler, who has worked at both Oxford University and Berlin on the project, explains "using neutron scattering gives us unrivalled insight into how different the quantum world can be from the every day". However, "the conflicting difficulties of a highly complex neutron experiment integrated with low temperature equipment and precision high field apparatus make this a very challenging undertaking indeed." In order to achieve success "in such challenging experiments under extreme conditions" the HZB in Berlin has brought together world leaders in this field. By combining the special expertise in Berlin whilst taking advantage of the pulsed neutrons at ISIS, near Oxford, permitted a perfect combination of measurements to be made. More information: Quantum Criticality in an Ising Chain: Experimental Evidence for Emergent E8 Symmetry. 
Article in Science, DOI:RE1180085/JEC/PHYSICS From thespike at satx.rr.com Fri Jan 8 01:56:32 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 19:56:32 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <6405055B9B8D4FF2AEFD28B62B2C8223@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <4B4690D0.5000200@satx.rr.com> On 1/7/2010 3:22 PM, spike wrote: >> > ...Also, with the phone calls, I would regularly yell to my mom to >> >answer the phone, and yell who it was calling, and*then* the phone would >> > ring. This really freaked my mom out and she finally asked me to stop doing >> > it. This was in the last 70's and early 80's... > Perhaps your young ears were able to detect the ultra high frequency sound > that the 70s era telephone electromechanical devices would make a couple of > seconds before the phone rang. Recall those things had a capacitor in them, > which had to charge, and the discharge cycle would cause the ring to be the > usual intermittent signal. Yes, this sort of explanation is just the kind of thing real parapsychologists immediately look for when examining "natural experiments." And such explanations can sometimes be overlooked and only discovered later. But tell me, Spike, how does this account for controlled double blinded tests where you wait by the phone for a call from one of several randomized callers at a certain time, hear the ring, name the caller, then answer--all of this under observation. Pure chance result, right? What else can it be? But wait--Does the magic capacitor sound different when it's Aunt Jane or Uncle Bill? Maybe so--do tell. See the following, and deconstruct away: quote: < Abstract - Telepathy with the Nolan Sisters Journal of the Society for Psychical Research 68, 168-172 (2004) A FILMED EXPERIMENT ON TELEPHONE TELEPATHY WITH THE NOLAN SISTERS by RUPERT SHELDRAKE, HUGO GODWIN AND SIMON ROCKELL ABSTRACT: The ability of people to guess who is calling on the telephone has recently been tested experimentally in more than 850 trials. The results were positive and hugely significant statistically. Participants had four potential callers in distant locations. At the beginning of each trial, remote from the participant, the experimenter randomly selected one of the callers by the throw of a die, and asked the chosen caller to ring the participant. When the phone rang, the participant guessed who the caller was before picking up the receiver. By chance, about 25% of the guesses would have been correct. In fact, on average 42% were correct. The present experiment was an attempt to replicate previous tests, and was filmed for television. The participant and her callers were all sisters, formerly members of the Nolan Sisters band, popular in Britain in the 1980s. We conducted 12 trials in which the participant and her callers were 1 km apart. Six out of 12 guesses (50%) were correct. The results were significant at the p=0.05 level. 
For full text in html or pdf formats linked at site> Spike's expected response: well, hmmm, there's got to be a technical reason for this, but I've got to feed the kid now so I'll think about it next week if I can remember. My good pal John Clark's response: That Sheldrake idiot is a fool and made it all up and anyway they cheated and it's all BULLSHIT, I'm not wasting any time reading this crap. My response: Hmm, potentially interesting but the numbers are too small to be anything but extremely provisional, let's see some more replications by other people. Damien Broderick From spike66 at att.net Fri Jan 8 03:18:28 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 19:18:28 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B4690D0.5000200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> Message-ID: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Damien Broderick > ... > > By chance, about 25% of the guesses would have been correct. > In fact, on average 42% were correct. The present experiment > was an attempt to replicate previous tests, and was filmed > for television. The participant and her callers were all > sisters, formerly members of the Nolan Sisters band, popular > in Britain in the 1980s. We conducted 12 trials in which the > participant and her callers were 1 km apart. Six out of 12 guesses > (50%) were correct. The results were significant at the p=0.05 level. > > > For full text in html or pdf formats linked at site> > > Spike's expected response: well, hmmm, there's got to be a > technical reason for this, but I've got to feed the kid now > so I'll think about it next week if I can remember... Damien Broderick {8^D I do need to go spend some quality time with my favorite larva, but before I do that, I would conjecture that an explanation for p=.05 is not necessarily needed. Many such experiments could have been done, but the only ones we hear about are those which go beyond 5% weird. Regarding the phrase "...was filmed for television..." that makes me suspicious right up front, because it forms a filter: in such a medium, only noteworthy stuff is worthy of note. Consider the Monty Python comedy troupe from the 70s. Their stuff was mostly unscripted, ad-lib, and yet it was hilarious, ja? One wonders how they could possibly be so funny, in front of a live audience no less. Well, they would clown around for hours, then pick the stuff that the audience loved, and that concentrated the laughs to where you have knights that say NI and so forth. They might have had to cut up for 20 hrs to get a good hilarious hour of TV. Similarly, there could have been a number of sister acts, and the only ones that made it to prime time were the Nolans. The most astounding thing about this experiment is not that they managed statistical significance, but rather that one pair of humans could spawn five larvae of such jaw dropping comliness as this group of stunning beauties. 
http://www.youtube.com/watch?v=P8ACght8QFE Oh my evolution, what a bevy of lovelies are these. I had never heard of them before you pointed to it, and for that I do thank you sir. Their music is gorgeous too. With those looks they could have sang like fingernails on a chalk board, and I would still like them, but that they should all be sisters and all sing like angels is far more remarkable than their performance at phone guessing. spike From thespike at satx.rr.com Fri Jan 8 03:32:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 21:32:14 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> Message-ID: <4B46A73E.2090200@satx.rr.com> On 1/7/2010 9:18 PM, spike wrote: > I would conjecture that an explanation for p=.05 is not > necessarily needed. Many such experiments could have been done, but the > only ones we hear about are those which go beyond 5% weird. > > Regarding the phrase "...was filmed for television..." that makes me > suspicious right up front, because it forms a filter: in such a medium, only > noteworthy stuff is worthy of note. If you're assuming a lack of probity in the experimenter, why not just say the whole thing was scripted or made up and have done with it? This is the bottom line with most skeptical retorts. What would it take to dispose of this canard? Sworn statements from everyone involved? No, people like that, they'd lie for profit, right? Or just because they're deluded loons. Or maybe I just invented it to sell my books. Damien Broderick From spike66 at att.net Fri Jan 8 04:14:46 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 20:14:46 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46A73E.2090200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> Message-ID: <0AB97B96F36746E79B5AE261B3960142@spike> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Damien Broderick > Sent: Thursday, January 07, 2010 7:32 PM > To: ExI chat list > Subject: Re: [ExI] Psi (no need to read this post you already > knowwhatitsays ) > > On 1/7/2010 9:18 PM, spike wrote: > > > ... > > > > Regarding the phrase "...was filmed for television..." that > makes me > > suspicious right up front, because it forms a filter: in such a > > medium, only noteworthy stuff is worthy of note. > > If you're assuming a lack of probity in the experimenter, why > not just say the whole thing was scripted or made up and have > done with it? This is the bottom line with most skeptical > retorts. What would it take to dispose of this canard? 
Sworn > statements from everyone involved? No, people like that, > they'd lie for profit, right? Or just because they're deluded > loons. Or maybe I just invented it to sell my books. > > Damien Broderick Far too modest are you, Damien. Your books sell themselves, by the outstanding nature of the content. Regarding the experimenters, did they actually claim to have no other groups than the Nolans? I am not accusing them of trickery. I may be willing to accuse the TV producers of amplifying the strangeness, but they should have no heartburn from this, for the job of TV people is to entertain. I had an idea which may have contributed to the remarkable outcome of the Nolan sisters experiment. The sisters would know each others' sleep patterns: who stayed up late, who was the early riser. If the call came in early or late, it may reduce the likely pool by a sister or two in some cases. The experimenters could have been unaware of this themselves, so there is no need for accusations. Another possibility is that family chatter would tip off the sisters if one was temporarily absent from the game: off to her father-in-law's hospital bed for instance, reducing by one the pool of possibilities. The remaining sisters may not think to suspend the game until rejoined by the fifth singing beauty. Dunno Damien. It might be something weird going on, but the proof is terriby elusive almost by design. In engineering, when one gets a greater than 3 sigma result in a measurement, the experiment is assumed flawed and often discarded, thus the footnote often being seen "3 sigma clipping." What the field needs at this point is not more weird experimental results but rather some plausible theoretical basis. Consider cryonics. No one took that seriously until 1986, when St. Eric of Drexler proposed theoretical nanobots which might some day read the configuration of a frozen brain, allowing it to be recreated in a non-frozen medium. With that theoretical basis, the whole notion gained a following, even if still small and fringy. The closest I can come to a theoretical explanation for precognition would be hordes of nanoprocessors (midichlorians?) which live within the body of the human, which communicate among themselves and could theoretically pass information around. Michael gets in an accident, his nanobots contact the nanobots in his sister's body, by physically understandable means. That they would do so if they exist should not be so very extraordinary, for things in the meter-scale world happen very slowly from their point of view. Their being involved in a tire screech and a bone-crushing impact would be analogous to humans watching an infestation of pine beetles devouring a forest. Damien, your being a creative SF writer qualifies you to come up with something better than midichlorians. The point is that for the psi notion to advance any further, its needs a plausible, even if unlikely, explanation more than it needs more experimental data. Lacking that explanation, all weird experimental outcomes will always be dismissed. 
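The selection point above can also be put in numbers: if enough independent experiments are run on a purely chance effect, a predictable fraction will clear the p=0.05 bar anyway. A minimal sketch, with the 100-experiment count chosen purely for illustration (nothing in the thread reports how many such studies were actually run):

from math import comb

def binom_pmf(k, n, p):
    # probability of exactly k "significant" outcomes in n independent null experiments
    return comb(n, k) * (p ** k) * ((1 - p) ** (n - k))

n_experiments, alpha = 100, 0.05   # assumed numbers, for illustration only
print(n_experiments * alpha)                              # 5.0 spurious positives expected
print(round(1 - binom_pmf(0, n_experiments, alpha), 3))   # ~0.994 chance of at least one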
spike From thespike at satx.rr.com Fri Jan 8 04:38:55 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 22:38:55 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <0AB97B96F36746E79B5AE261B3960142@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> Message-ID: <4B46B6DF.50504@satx.rr.com> On 1/7/2010 10:14 PM, spike wrote: > I had an idea which may have contributed to the remarkable outcome of the > Nolan sisters experiment. The sisters would know each others' sleep > patterns: who stayed up late, who was the early riser. If the call came in > early or late, it may reduce the likely pool by a sister or two in some > cases. The experimenters could have been unaware of this themselves, so > there is no need for accusations. > > Another possibility is that family chatter would tip off the sisters if one > was temporarily absent from the game: off to her father-in-law's hospital > bed for instance, reducing by one the pool of possibilities. The remaining > sisters may not think to suspend the game until rejoined by the fifth > singing beauty. Just to pop out this bit for comment: did you take a moment to read the linked paper, Spike? What you suggest here off the top of your head has absolutely nothing in common with the experiment as described. I can't easily imagine this being acceptable on ExIchat if someone was trying to laugh away/explain away results of professional stem cell work or solar power generation in space or other topics frequently mocked by those who can't be troubled to find out what's actually being done and proposed. Damien Broderick From msd001 at gmail.com Fri Jan 8 04:44:29 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 7 Jan 2010 23:44:29 -0500 Subject: [ExI] golden ratio discovered in quantum world In-Reply-To: <24B110CD0BA14840BA294A133E8051D9@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <24B110CD0BA14840BA294A133E8051D9@spike> Message-ID: <62c14241001072044i1ff0643ay875492563abf3e16@mail.gmail.com> On Thu, Jan 7, 2010 at 8:16 PM, spike wrote: > > Since we are discussing weirdness, check this: > > http://www.physorg.com/news182095224.html quantum fractal and phi ? interesting that a fractal with Hausdorf dimension of phi is connected but non overlapping so what are the physical properties of a system "tuned to quantum critical" ? Hopefully it proves to create something special like room-temperature superconductivity or an even more fantastic property. There didn't seem to be easy to access links to more information. :( I did read a lot about E8, but it was fantastically over my head. 
From spike66 at att.net Fri Jan 8 05:08:14 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 21:08:14 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46B6DF.50504@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> Message-ID: <8BF0812307DF4DB5A2C71BA55A635B80@spike> > ...On Behalf Of Damien Broderick ... > On 1/7/2010 10:14 PM, spike wrote: > > > I had an idea which may have contributed to the remarkable > outcome of > > the Nolan sisters experiment. The sisters would know each others' > > sleep patterns... > > Just to pop out this bit for comment: did you take a moment > to read the linked paper, Spike? What you suggest here off > the top of your head has absolutely nothing in common with > the experiment as described... Damien Broderick I confess I did not read the details of the experiment, and I accept my scolding. That being said I reiterate my contention that what is missing is some kind of theoretical, even if wildly implausible, explanation. To that end, I propose the following: let us imagine explanations for psi or any supernatural phenom using physics that we can theoretically understand. I proposed midichlorians before, and these are at least theoretically possible. I don't see any reason why it is physically impossible for nanoprocessors to exist, a few trillion atoms, still small enough to be extremely difficult to identify, which ride in or on humans or beasts, watching and listening, learning and communicating among themselves and with their hosts to some extent. I will attempt another one, not original with me of course, the idea having been around for some time: we are already living in a post singularity world, we already exist as software, we are avatars. Weak low-gain feedback loops do exist, intentionally placed in our software, to cause a few observations that defy our explanation or understanding, in order to keep us wondering and searching. Examples would be psi, quantum mechanics, the baffling double slit experiment, the constancy of the speed of light, the nature of love, the mystery of life itself. With this game, I am not asking anyone to actually believe that there are midichlorians or that we are software with intentionally programmed strange-loops, but rather I am asking you to dig deep within your creative minds to propose some kind of wildly implausible but theoretically possible explanations for supernatural phenomena. 
spike From steinberg.will at gmail.com Fri Jan 8 05:27:41 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 8 Jan 2010 00:27:41 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <8BF0812307DF4DB5A2C71BA55A635B80@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> Message-ID: <4e3a29501001072127t553972aen962506f9435ef204@mail.gmail.com> > > On Fri, Jan 8, 2010 at 12:08 AM, spike wrote: > > With this game, I am not asking anyone to actually believe that there are > midichlorians or that we are software with intentionally programmed > strange-loops, but rather I am asking you to dig deep within your creative > minds to propose some kind of wildly implausible but theoretically possible > explanations for supernatural phenomena. > > spike > ok I just thought of something sort of neat. The sisters didn't have enough trials to be significant, but this might explain other things. A lot of information in the brain is constantly changing. Maybe changing information is modeled mathematically by universal neurological constructs in all humans, to sort away information or something. The info suddenly pops up once in a while in your head; we all have times when we remember something suddenly; it is possible that this remembering is based on a specific mathematical form as well. So: Some people are very close--family, friends, lovers. Many of their memories are identical but for the viewpoint. These memories, as they are stored (objectively?) in their brains (with similar structures because of genetics and upbringing as well as environment and shared experience by non-kin) begin the process of slow change. These people interact for a long time and thus accrue many of these memorial constructs, gradually morphing in the brain. In the future, one of them has been, consciously or subconsciously, recalling ideas and memories somehow tied to the relationship between himself and the other. That other, with the same thing happening in her brain, recalls the inverse relationship. So when A thinks about B and makes the call, B is already subconsciously anticipating it. Is it possible that these sorts of things can happen once in a while? People who are strongly connected are imbued with many of those strong memories, and so, in a rare occurrence, I might know my dad is calling because we both had been thinking about that time we played baseball and it made him want to call me. It might work even better in the short term--my girlfriend and I see a sign on Monday that mentally resurfaces on Wednesday and prompts a call. Prediction is based on physical interpretation of objective information. This is a possible physical method of interpreting objective information, so is this a good enough start or is it too woefully wacky again? _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emlynoregan at gmail.com Fri Jan 8 05:27:56 2010 From: emlynoregan at gmail.com (Emlyn) Date: Fri, 8 Jan 2010 15:57:56 +1030 Subject: [ExI] Alka-Seltzer added to spherical water drop in microgravity Message-ID: <710b78fc1001072127r54563878pf81ae32ee517176a@mail.gmail.com> Alka-Seltzer added to spherical water drop in microgravity http://www.youtube.com/watch?v=bgC-ocnTTto&feature=player_embedded How cool is that? -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From thespike at satx.rr.com Fri Jan 8 05:29:31 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 07 Jan 2010 23:29:31 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <8BF0812307DF4DB5A2C71BA55A635B80@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> Message-ID: <4B46C2BB.5000003@satx.rr.com> On 1/7/2010 11:08 PM, spike wrote: > I am not asking anyone to actually believe that there are > midichlorians or that we are software with intentionally programmed > strange-loops, but rather I am asking you to dig deep within your creative > minds to propose some kind of wildly implausible but theoretically possible > explanations for supernatural phenomena. How did "supernatural" get into the discussion? Ah yes, recall my earlier speculation about anaphylactic shock triggered by suspicions of religion? There's no shortage of weird ideas to explain the weird phenomena labeled "psi"--my OUTSIDE THE GATE book canvases quite a few--but they do tend to be built out of handwavium at the moment, just as continental drift was before plate tectonics. Observed nonlocality in time (information exchanges outside the light cone) is a challenge to any routine explanation except maybe the simulation narrative. But there are physicists with ideas on that, such as Richard Shoup of the Boundary Institute.** My bottom line *isn't* the absence of a theory; it's whether there's any solid evidence for the weirdness. And there is. **see for example (Nobody will, I expect.) 
Damien Broderick From spike66 at att.net Fri Jan 8 05:47:40 2010 From: spike66 at att.net (spike) Date: Thu, 7 Jan 2010 21:47:40 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46C2BB.5000003@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com><8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> Message-ID: <3A63D7F97C254A96BEF12AE280A0DF06@spike> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Damien Broderick > Sent: Thursday, January 07, 2010 9:30 PM > To: ExI chat list > Subject: Re: [ExI] Psi (no need to read this post you already > knowwhatitsays ) > > On 1/7/2010 11:08 PM, spike wrote: > > I am not asking anyone to actually believe that there are > > midichlorians or that we are software... > > How did "supernatural" get into the discussion? Ah yes, > recall my earlier speculation about anaphylactic shock > triggered by suspicions of religion? I use the term in the general sense, not about god or angels, but rather anything outside our currently understood framework. Supernatural would include advanced spacefaring species for instance, which have a perfectly natural explanation: they evolved, they became technologically advanced, they went looking around. The notion that we are software does require a programmer beyond us currently however. spike From msd001 at gmail.com Fri Jan 8 06:03:59 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 8 Jan 2010 01:03:59 -0500 Subject: [ExI] Alka-Seltzer added to spherical water drop in microgravity In-Reply-To: <710b78fc1001072127r54563878pf81ae32ee517176a@mail.gmail.com> References: <710b78fc1001072127r54563878pf81ae32ee517176a@mail.gmail.com> Message-ID: <62c14241001072203y4ce66d8fj5c651e088db3f605@mail.gmail.com> On Fri, Jan 8, 2010 at 12:27 AM, Emlyn wrote: > Alka-Seltzer added to spherical water drop in microgravity > > http://www.youtube.com/watch?v=bgC-ocnTTto&feature=player_embedded > > How cool is that? Very. 
:) If you don't have a microgravity environment available to reproduce this at home, try one of the non-newtonian fluid oscillators (cornstarch on a speaker cone) http://www.youtube.com/results?search_query=non-newtonian+fluid From thespike at satx.rr.com Fri Jan 8 06:04:06 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 00:04:06 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <3A63D7F97C254A96BEF12AE280A0DF06@spike> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com><8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <3A63D7F97C254A96BEF12AE280A0DF06@spike> Message-ID: <4B46CAD6.4090608@satx.rr.com> On 1/7/2010 11:47 PM, spike wrote: > I use the term in the general sense, not about god or angels, but rather > anything outside our currently understood framework. So quantum gravity is supernatural now, and conventional QT was supernatural in 1890? This is a very unusual version of the idiom. Why not just use "paranormal"? It's a customary usage and doesn't imply any particular metaphysical stance, IMO, as "supernatural" does in the vernacular. At some stage, when the phenom are understood (or adequately debunked), they will indeed become "normal" but that shift in perspective strikes me as less paradoxical. (Does the paradoxical become doxical when understood? Could genetic engineering create a parrot ox? and so on.) Damien Broderick From max at maxmore.com Fri Jan 8 06:07:13 2010 From: max at maxmore.com (Max More) Date: Fri, 08 Jan 2010 00:07:13 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) Message-ID: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> > > >Spike's expected response: well, hmmm, there's got to be a technical >reason for this, but I've got to feed the kid now so I'll think about it >next week if I can remember. My good pal John Clark's response: That >Sheldrake idiot is a fool and made it all up and anyway they cheated and >it's all BULLSHIT, I'm not wasting any time reading this crap. My >response: Hmm, potentially interesting but the numbers are too small to >be anything but extremely provisional, let's see some more replications >by other people. Yes, the numbers are too small. Also: How many negative trials were *not* reported. In Psi experiments, we rarely hear about "the silent evidence" as Taleb calls it. Max From jonkc at bellsouth.net Fri Jan 8 05:48:03 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 8 Jan 2010 00:48:03 -0500 Subject: [ExI] Some new angle about AI. In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <3BB74389-B736-443C-BF77-BED2DA33D78E@bellsouth.net> Message-ID: On Jan 7, 2010, Aware wrote: >> Because we learned from the history of Evolution that consciousness is easy >> but intelligence is hard. > > Well, that response clearly adds nothing to the discussion Which word didn'y you understand? > and you stripped out my supporting text. 
I quote just enough material for you to know which part I'm responding to, feel free to strip my text in return, in fact I wish you would. The respond button is a diabolical invention, if people had to laboriously type in all quoted material I'll bet people would get to the point mighty damn fast. >>> before evolutionary processes stumbled upon the additional, supervisory, hack of self-awareness >> >> What you just said is logically absurd. > > Really? Yes really. > I note that you're not asking for any clarification. None was needed, you were perfectly clear, just illogical. >> If consciousness doesn't effect intelligence > > Do you mean literally "If consciousness doesn't produce intelligence" > or do you mean "If consciousness doesn't affect intelligence"? Put it this way, if intelligence didn't automatically produce consciousness then we wouldn't have it because Evolution couldn't even see it much less develop it. > appears that you must harbor a mystical notion The guy is known for disliking mysticism so lets call him a mystic. Boy I never heard that one before! > of "consciousness", that contributes to the somewhat > "intelligent" behavior of the amoeba I make no claim that an amoeba is intelligent or even "intelligent". Others have said that but not me; but I did say that if you accept that hypothetical then it is most certainly conscious. > despite its apparent lack of the neuronal apparatus necessary to support a sense of self. Unless you have just made the scientific discovery of the ages nobody knows what sort of neuronal apparatus are necessary for consciousness. >> there is no way Evolution could have "stumbled upon" the trick of generating consciousness > > It may be relevant that the way evolution (It's not clear why you would capitalize that word) If people can capitalize God I can capitalize Evolution, Scientific Method too. > works is always in terms of blind, stumbling, random variation. That is true but if Evolution stumbles onto something that doesn't help its genes get into the next generation then it has discovered nothing and just keeps on stumbling . > the extent the organism's fitness would be enhanced by the ability to model possible variations on itself Fine, if you're right then the ability of an organism to model itself would change its behavior in such a way that it is more likely to survive than if it lacked this ability; and observing behavior is what the Turing Test is all about. >> In short if even one conscious being exists on Planet Earth >> and if Evolution is true then the Turing Test works; > > Huh? If there were only one conscious being, then wouldn't that have > to be the one judging the Turing Test? Yes. > And if there is no other conscious being, how could any (non-conscious by definition) subject > pass the test So what? The wouldn't pass the test nor should they if the test is valid. > such that the TT would be shown to "work"? The Turing Test will never be proven to work, few things outside of pure mathematics can be, but if you assume that Evolution is true and knowing from direct experience that at least one conscious being exists then you can deduce that the Turing Test must work. > It seems to me that we observe the existence of both classes of > evolved organisms. I belong to the class of conscious evolved organisms and I believe you belong to the same class because you pass the Turing Test. Of course you have no reason to think I'm conscious because you don't believe in the Turing Test, but never-mind. 
You know from direct experience that you are conscious so how did you come to be? If the same behavior can be produced without consciousness (and that's why the Turing Test doesn't work) then I repeat my question, how did you come to be? Evolution doesn't find or retain traits that don't help an organism survive. If Evolution can see something then so can the Turing Test, and consciousness is something. > > I'm guessing that our disagreement here comes down to different usage > and meaning of the terms "intelligence" and "consciousness" I don't think so. > and it might be significant that you stripped out all evidence and results of > my efforts to effectively define them. You seem not to play fair, so it's not much fun. Oh for God's sake! If somebody wants to read your original post again they certainly have the means of doing so. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Fri Jan 8 06:33:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 00:33:57 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> References: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> Message-ID: <4B46D1D5.7010001@satx.rr.com> On 1/8/2010 12:07 AM, Max More wrote: > How many negative trials were *not* reported. By "negative trials" I assume you mean something like "runs of trials with outcomes that were not significantly different from mean chance expectation." By "not reported" I assume you mean "deceptively hidden or discarded." My estimate in this case: None of them. Nobody has ever questioned Dr. Sheldrake's probity (although some of his theories are pretty hard to take seriously). Well, Randi did, once, until he was shown to have lied. > In Psi experiments, we > rarely hear about "the silent evidence" In analyses of psi experiments by anomalies researchers, actually we hear all the time about the likelihood and magnitude of what is termed "the file drawer." That's where non-significant results are supposed by critics to be hidden away. The reality is that the file drawer can't *possibly* hide sufficient dud data to account for the observations. I take it you have reason to doubt this; what is your evidence? Here's Dean Radin's THE CONSCIOUS UNIVERSE (not a bad summary) on the file-drawer effect (selective reporting) in one major protocol, and this is now standard: (page 79-80) "Another factor that might account for the overall success of the ganzfeld studies was the editorial policy of professional journals, which tends to favor the publication of successful rather than unsuccessful studies. This is the 'file-drawer' effect mentioned earlier. Parapsychologists were among the first to become sensitive to this problem, which affects all experimental domains. In 1975 the Parapsychological Association's officers adopted a policy opposing the selective reporting of positive outcomes. As a result, both positive and negative findings have been reported at the Parapsychological Association's annual meetings and in its affiliated publications for over two decades. Furthermore, a 1980 survey of parapsychologists by the skeptical British psychologist Susan Blackmore had confirmed that the file-drawer problem was not a serious issue for the ganzfeld meta-analysis. Blackmore uncovered nineteen complete but unpublished ganzfeld studies. 
Of those nineteen, seven were independently successful with odds against chance of twenty to one or greater. Thus while some ganzfeld studies had not been published, Hyman and Honorton agreed that selective reporting was not an important issue in this database. Still, because it is impossible to know how many other studies might have been in file drawers, it is common in meta-analyses to calculate how many unreported studies would be required to nullify the observed effects among the known studies. For the twenty-eight direct-hit ganzfeld studies, this figure was 423 file-drawer experiments, a ratio of unreported-to-reported studies of approximately fifteen to one. Given the time and resources it takes to conduct a single ganzfeld session, let alone 423 hypothetical unreported experiments, it is not surprising that Hyman agreed with Honorton that the file-drawer issue could not plausibly account for the overall results of the psi ganzfeld database. There were simply not enough experimenters around to have conducted those 423 studies. Thus far, the proponent and the skeptic had agreed that the results could not be attributed to chance or to selective reporting practices." Damien Broderick From max at maxmore.com Fri Jan 8 08:40:31 2010 From: max at maxmore.com (Max More) Date: Fri, 08 Jan 2010 02:40:31 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) Message-ID: <201001080840.o088ec6A009055@andromeda.ziaspace.com> Damien: I know how frustrating it must be for you to discuss the psi topic on this list. It must be how I often feel when I suggest that maybe climate models are perhaps not highly reliable guides to the present and future... I would be delighted if psi phenomena turned out to be real. For one thing, it would annoy John Clark, who is holding his reflexive rationalism a little too tightly and not allowing it to breathe. (John, quite often I enjoy your sharp, brutal expressions of sanity, but on this topic I think you're being overly curt and quick.) For another thing, it would shake up physics and expand our horizons and potentially open new avenues to enhancement. So, I have nothing intrinsically against it. The sources of my strong resistance to accepting claims of real psi phenomena are mainly (a) that it seems to conflict with our best-established knowledge [which, of course, is not an ultimate reason for dismissal], and (b) my past experience with this topic, both in my personal experience and my (now long past, it's true) extensive reading on the topic. My attraction to the idea of paranormal powers would be obvious to anyone who knew me in the mid to late 1970s. I spent quite a bit of time trying to develop psychic abilities when I was about 11 to 14. I read many books, practiced quite a few exercises and magic rituals, and tried out several groups (including dowsers, Kabbalists, Transcendental Meditators (who were then promoting the ability to develop "sidhi's" or special powers like invisibility, levitation, and walking through walls), and the Rosicrucians). At the time, I lacked the intellectual tool kit for structured critical thinking, yet I soon saw reasons for doubting each and every claim. Others insisted that I was good at dowsing underground water paths, even though it was obvious that no evidence existed to support the claim. 
For a while (when I was 12, possibly 13), I convinced myself that I could make a weight-on-a-string swing by the power of my mind, but I eventually realized it was my unconscious and very slight movements -- as shown by my inability to cause swinging if the top of the string wasn't connected to my finger... etc. That is, my own experience in both practice and reading revealed the sheer amount of crap out there that went under the psi banner. (A search for "psychic" under Books at Amazon shows that all this crap is still there.) As for specific critiques, I don't remember many to cite at the moment. One that I do recall is John Sladek's book, The New Apocrypha. >By "negative trials" I assume you mean something like "runs of trials >with outcomes that were not significantly different from mean chance >expectation." By "not reported" I assume you mean "deceptively hidden or >discarded." Actually, no, that's not (only or mostly) what I mean -- although that is certainly possible and seems to have happened repeatedly in the past. There's a general publication bias against negative results. It's a problem in numerous fields of study. People getting negative results are less likely to write them up carefully and submit them. Publications are less likely to publish them. Still, thanks for your comments and pointers on this issue. It's good to see some attention to the problem of silent evidence. I don't buy what I just read on that without more follow-up, but it's an encouraging sign. It may be that my resistance to claims of psi phenomena is just sour grapes, since in my own life I've never observed the slightest hint of psychic events or abilities. However, past experience makes me extremely reluctant to devote significant time to looking at new evidence (esp. when so much previous new evidence ended up looking bad). That doesn't mean I am certain psi phenomena are all false. I would like your book on the topic, Damien. But, given my past experience and the apparently minor nature of claimed results, it's just not likely that it's going to be a top priority. I know that's annoying and frustrating, but I hope you can understand why I see it that way (and, I suspect, quite a few other people on this list). If it turns out that psychic phenomena really don't exist, it will be disappointing, but perhaps technology can allow us to convincingly fake it or simulate it (no, this isn't an invitation to mention Chinese Rooms). I hope this post is reasonably coherent. Natasha has already got out of bed and gently told me off for staying up so late. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From max at maxmore.com Fri Jan 8 08:43:14 2010 From: max at maxmore.com (Max More) Date: Fri, 08 Jan 2010 02:43:14 -0600 Subject: [ExI] I'm no fool Message-ID: <201001080843.o088hLc2019286@andromeda.ziaspace.com> With the current discussion about psi, and our continuing interest in rational thinking... 
Recently, I heard a line in a South Park episode that I found extremely funny and really quite deep, paradoxical, and illuminating: "I wasn't born again yesterday" (This was in South Park, season 7, "Christian Rock Hard") Max From cetico.iconoclasta at gmail.com Fri Jan 8 10:27:46 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Fri, 8 Jan 2010 08:27:46 -0200 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> Message-ID: <02e601ca904d$39c73ac0$fd00a8c0@cpdhemm> >>No, not really, it is more passive than that. I don't really have any >>control over it happening. >> I had an image of your leg, below your knee, come to mind, and I also had >> a tree come to mind, >> but I wouldn't know what the heck that means. You've got it partially right (despite being very very vague). It was the tibia and the knee in a motorcycle accident, but there was no tree involved whatsoever, only two puppy dogs that decided to chase each other in a busy street. This little completely unscientific experiment doesn't prove (or disprove) anything. It was just for the sake of curiosity. From cetico.iconoclasta at gmail.com Fri Jan 8 10:35:46 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Fri, 8 Jan 2010 08:35:46 -0200 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM><79471C131D7F4EE28EE05A725ED29AED@spike><4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com><02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm><398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> Message-ID: <02f801ca904e$5853dba0$fd00a8c0@cpdhemm> spike> Perhaps your young ears were able to detect the ultra high frequency sound > that the 70s era telephone electromechanical devices would make a couple > of > seconds before the phone rang. Recall those things had a capacitor in > them, This can happen. I myself can hear the high freq noise of a tube-tv being turned on anywhere in the house. It's very very annoying to the point that I'm really glad crt tvs are almost all gone. From pharos at gmail.com Fri Jan 8 11:42:40 2010 From: pharos at gmail.com (BillK) Date: Fri, 8 Jan 2010 11:42:40 +0000 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <4B46D1D5.7010001@satx.rr.com> References: <201001080607.o0867Qa2014039@andromeda.ziaspace.com> <4B46D1D5.7010001@satx.rr.com> Message-ID: On 1/8/10, Damien Broderick wrote: > Thus far, the proponent and the skeptic had agreed that the results could > not be attributed to chance or to selective reporting practices.? > > Just because I can't work out how the trick was done, doesn't validate the magic trick. Magicians are experts at their trade. (Though people can be quickly trained in how to do psychic 'cold readings'). Similarly, it is not down to me to try to examine the experimental protocol, double-blind checks, honesty or self-delusion of participants, flawed statistical analysis, bias in testing, etc. etc. 
Other scientists have to be able to replicate the experiments. Short tests that involve guessing one of four numbers (or one of four people phoning in), or one of five shapes, are very susceptible to producing runs of 'above or below average' results. That's why when very long runs are done the results do approach the expected average (or the psychics get so bored that their powers fade out). Odd things happen all the time. One man has been struck by lightning ten times, somebody has to win the lottery (sometimes more than once), some gamblers get lucky streaks and other gamblers get losing streaks, and so on. These things happen in a random universe. Random doesn't mean always average. But, anyway, what's the point? If the psi effects are pretty much unpredictable / random then they cannot be used for anything. I want psi powers that are usable and practical. If I could think hard to get friends to phone me, it would save me a fortune in phone bills. Similarly, they ought to know not to phone me when I'm in the shower or in the middle of dismantling a motorcycle engine in the middle of the living-room. BillK From stefano.vaj at gmail.com Fri Jan 8 11:46:26 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 12:46:26 +0100 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46A73E.2090200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> Message-ID: <580930c21001080346x407eb5b7vac63920b890b3b1b@mail.gmail.com> 2010/1/8 Damien Broderick : > If you're assuming a lack of probity in the experimenter, why not just say > the whole thing was scripted or made up and have done with it? This is the > bottom line with most skeptical retorts. What would it take to dispose of > this canard? Sworn statements from everyone involved? No, people like that, > they'd lie for profit, right? Or just because they're deluded loons. Or > maybe I just invented it to sell my books. Of course, the general idea behind science is that if you do not believe something which is being reported, you do not rely on ad personam arguments, you go and see for yourself. Now, even though I have never given it a try, I am inclined to believe that anybody trying to guess cards beyond a wall ends up finding, if he is dedicated enough, a marginal discrepancy between the actual results and the expected statistical distribution which becomes more and more unlikely as the number of trials grows (an interesting experiment that I am not aware has ever been tried, and might have some weight with respect to our little discussion on AGI, is how a PC would perform in similar circumstances: worse, better, just the same?). Some much more dramatic anecdotes reported in your book are really non-repeatable anyway, so I think that you may believe or not that something strange is happening as you please. In any event, strange facts are bound to happen after all, irrespective of the fact that organic brains are involved at all... 
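BillK's point about short runs versus long runs is easy to see in simulation: with a 1-in-4 chance per guess, blocks of a dozen trials scatter widely around 25%, while long runs settle toward it. A small Monte Carlo sketch, with run lengths and seed chosen arbitrarily for illustration:

import random

rng = random.Random(42)

def hit_rate(n_trials, p=0.25):
    # fraction of hits in n_trials blind guesses, each with probability p of being right
    return sum(rng.random() < p for _ in range(n_trials)) / n_trials

for n in (12, 100, 10000):
    rates = [round(hit_rate(n), 2) for _ in range(5)]
    print(n, rates)   # short runs bounce around 0.25; long runs hug it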
-- Stefano Vaj From gts_2000 at yahoo.com Fri Jan 8 13:44:39 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 05:44:39 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <640685.78939.qm@web36505.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: >> Yes and now you see why I claim Cram's surgeon must go > in repeatedly to patch the software until his patient passes > the Turing test: because the patient has no experience, the > surgeon must keep working to meet your logical requirements. > The surgeon finally gets it right with Service Pack 9076. > Too bad his patient can't know it. > > The surgeon will be rightly annoyed if the tweaking and > patching has not been done at the factory so that the p-neurons just > work. My point here concerns the fact that because experience affects behavior including neuronal behavior, and because the patient presents with symptoms indicating no experience of understanding language, and because on my account p-neurons != c-neurons, the p-neurons cannot work as advertised "out of the box". The initial operation fails miserably. The surgeon must then keep reprogramming and replacing more natural neurons throughout the patient's brain. He succeeds eventually in creating intelligent and coherent behavior in his patient, but it costs the patient most or all his intentionality. -gts From stefano.vaj at gmail.com Fri Jan 8 13:46:33 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 14:46:33 +0100 Subject: [ExI] atheism In-Reply-To: <504420.54703.qm@web81603.mail.mud.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> <0AB6C80C-5087-438F-8FF5-27CE4BB37AC1@mac.com> <580930c21001061239o26cc8359q675d088d0213c5ce@mail.gmail.com> <504420.54703.qm@web81603.mail.mud.yahoo.com> Message-ID: <580930c21001080546y76cc3988md690611d514c7e9f@mail.gmail.com> 2010/1/7 Kevin Freels : > From: Stefano Vaj > To: ExI chat list > Sent: Wed, January 6, 2010 2:39:10 PM > Subject: Re: [ExI] atheism >> What I mean there is that while it is perfectly normal in everyday >> life to believe things without any material evidence thereof (the >> existence of cats and sleep does not tell me anything about the >> current state of my cat any more than the existence of number 27 on >> the roulette does not provide any ground for my belief that this is >> the number which is going to win, and therefore on which I should bet, >> at the next throw of the ball), what is abnormal is to claim that such >> assumptions are a philosophical necessity or of ethical relevance. > > It is quite different to say "I am convinced there is no God" than it is to > say "I am not convinced there is a God" > There is no evidence disproving the existence of God so to believe there is > no god is indeed a faith in itself. Why, not any more than acquitting a man from a murder charge because there is no evidence that he is implicated is equal to convicting him... :-) We cannot avoid forming beliefs about things that are not proved (e.g., whether it will rain tomorrow or not), but this has nothing to do with "faith" in a monotheistic sense. -- Stefano Vaj From estropico at gmail.com Fri Jan 8 13:58:38 2010 From: estropico at gmail.com (estropico) Date: Fri, 8 Jan 2010 13:58:38 +0000 Subject: [ExI] ExtroBritannia: The Friendly AI Problem: how can we ensure that superintelligent AI doesn't terminate us? 
Message-ID: <4eaaa0d91001080558s3f06d8dfq88a222ac42b0a30@mail.gmail.com> The Friendly AI Problem: how can we ensure that superintelligent AI doesn't terminate us? Venue: Room 416, Birkbeck College. Date: Saturday 23rd December. Time: 2pm-4pm. About the talk: Suppose that humans succeed in understanding just what it is about the human brain that makes us smart, and manage to port that over to silicon based digital computers. Suppose we succeed in creating a machine that was smarter than us. What would it do? Would we benefit from it? This talk will present arguments that show that there are many different ways that the creation of human-level AI could spell disaster for the human race. It will also cover how we might stave off that disaster - how we might create a superintelligence that is benevolent to the human race. About the speaker: Roko Mijic graduated from the University of Cambridge with a BA in Mathematics, and the Certificate of Advanced Study in Mathematics. He spent a year doing research into the foundations of knowledge representation at the University of Edinburgh and holds an MSc in informatics. He is currently an advisor for the Singularity Institute for Artificial Intelligence. Roko writes the blog "Transhuman goodness" For more details about Roko, see RokoMijic.com ** There's no charge to attend this meeting, and everyone is welcome. There will be plenty of opportunity to ask questions and to make comments. **Discussion will continue after the event, in a nearby pub, for those who are able to stay. ** Why not join some of the Extrobritannia regulars for a drink and/or light lunch beforehand, any time after 12.30pm, in The Marlborough Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a table where there's a copy of the book "Beyond AI: creating the conscience of the machine" displayed. ** Venue: Room 416 is on the fourth floor (via the lift near reception) in the main Birkbeck College building, in Torrington Square (which is a pedestrian-only square). Torrington Square is about 10 minutes walk from either Russell Square or Goodge St tube stations www.extrobritannia.blogspot.com UK Transhumanist Association: www.transhumanist.org.uk From estropico at gmail.com Fri Jan 8 14:06:55 2010 From: estropico at gmail.com (estropico) Date: Fri, 8 Jan 2010 14:06:55 +0000 Subject: [ExI] ExtroBritannia: The Friendly AI Problem: how can we ensure that superintelligent AI doesn't terminate us? In-Reply-To: <4eaaa0d91001080558s3f06d8dfq88a222ac42b0a30@mail.gmail.com> References: <4eaaa0d91001080558s3f06d8dfq88a222ac42b0a30@mail.gmail.com> Message-ID: <4eaaa0d91001080606q31d35240w3cabb7d08f4c25d0@mail.gmail.com> Ooops! Obviously I meant the 23rd of **January**! Cheers, Fabio On Fri, Jan 8, 2010 at 1:58 PM, estropico wrote: > The Friendly AI Problem: how can we ensure that superintelligent AI > doesn't terminate us? > > Venue: Room 416, Birkbeck College. > Date: Saturday 23rd December. > Time: 2pm-4pm. > > About the talk: > > Suppose that humans succeed in understanding just what it is about the > human brain that makes us smart, and manage to port that over to > silicon based digital computers. Suppose we succeed in creating a > machine that was smarter than us. > > What would it do? Would we benefit from it? > > This talk will present arguments that show that there are many > different ways that the creation of human-level AI could spell > disaster for the human race. 
It will also cover how we might stave off > that disaster - how we might create a superintelligence that is > benevolent to the human race. > > About the speaker: > > Roko Mijic graduated from the University of Cambridge with a BA in > Mathematics, and the Certificate of Advanced Study in Mathematics. He > spent a year doing research into the foundations of knowledge > representation at the University of Edinburgh and holds an MSc in > informatics. He is currently an advisor for the Singularity Institute > for Artificial Intelligence. > > Roko writes the blog "Transhuman goodness" > > For more details about Roko, see RokoMijic.com > > ** There's no charge to attend this meeting, and everyone is welcome. > There will be plenty of opportunity to ask questions and to make > comments. > > **Discussion will continue after the event, in a nearby pub, for those > who are able to stay. > > ** Why not join some of the Extrobritannia regulars for a drink and/or > light lunch beforehand, any time after 12.30pm, in The Marlborough > Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a > table where there's a copy of the book "Beyond AI: creating the > conscience of the machine" displayed. > > ** Venue: > > Room 416 is on the fourth floor (via the lift near reception) in the > main Birkbeck College building, in Torrington Square (which is a > pedestrian-only square). Torrington Square is about 10 minutes walk > from either Russell Square or Goodge St tube stations > > www.extrobritannia.blogspot.com > UK Transhumanist Association: www.transhumanist.org.uk > From gts_2000 at yahoo.com Fri Jan 8 14:00:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 06:00:40 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: Message-ID: <657810.82216.qm@web36506.mail.mud.yahoo.com> Somebody said... > Searle and Gordon aren't saying that machine consciousness isn't > possible. ?If you pay attention you'll see that once in a while > they'll come right out and say this, at which point you think they've > expressed an inconsistency. ?They're saying that even though it's > obvious that some machines (e.g. humans) do have consciousness, it's > also clear that no formal system implements semantics. And they're > correct. Right. For many years I despised Searle, considering him some sort of anti-tech philosophical Luddite. Then I took the time to really study him. I learned that I had based my opinion on a misunderstanding. Even if he's wrong, you won't find many people who better understand the challenge of strong AI. -gts From stathisp at gmail.com Fri Jan 8 15:14:25 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 02:14:25 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <640685.78939.qm@web36505.mail.mud.yahoo.com> References: <640685.78939.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/9 Gordon Swobe : > --- On Thu, 1/7/10, Stathis Papaioannou wrote: > >>> Yes and now you see why I claim Cram's surgeon must go >> in repeatedly to patch the software until his patient passes >> the Turing test: because the patient has no experience, the >> surgeon must keep working to meet your logical requirements. >> The surgeon finally gets it right with Service Pack 9076. >> Too bad his patient can't know it. >> >> The surgeon will be rightly annoyed if the tweaking and >> patching has not been done at the factory so that the p-neurons just >> work. 
> > My point here concerns the fact that because experience affects behavior including neuronal behavior, and because the patient presents with symptoms indicating no experience of understanding language, and because on my account p-neurons != c-neurons, the p-neurons cannot work as advertised "out of the box". The initial operation fails miserably. The surgeon must then keep reprogramming and replacing more natural neurons throughout the patient's brain. He succeeds eventually in creating intelligent and coherent behavior in his patient, but it costs the patient most or all his intentionality. You say experience affects behaviour, but you are quite happy with the idea that a zombie can reproduce human behaviour without having experience. So what is to stop a p-neuron from behaving like a c-neuron despite lacking experience if nothing stops the zombie from acting like a human, which is arguably a much harder task? -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Jan 8 15:28:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 07:28:36 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: Message-ID: <684147.29925.qm@web36506.mail.mud.yahoo.com> --- On Thu, 1/7/10, Stathis Papaioannou wrote: > The NCC is either gibberish or something trivially obvious, > like oxygen, since without it neurons wouldn't work and you > would lose consciousness. As I already mentioned, the presence of oxygen clearly plays a role in whatever physical conditions must exist in the brain for it to have subjective experience. The sum of all those physical conditions = the NCC. Neuroscientists will eventually understand the NCC in great detail. Whatever it turns out to be, we will no doubt someday have the ability to simulate it along with the rest of the brain on a computer just as we can simulate any other physical thing on a computer. And that computer simulation will *appear* conscious, not much different from the way simulations of ice cubes appear cold. -gts From gts_2000 at yahoo.com Fri Jan 8 15:40:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 07:40:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <32395.39984.qm@web36505.mail.mud.yahoo.com> --- On Fri, 1/8/10, Stathis Papaioannou wrote: > You say experience affects behaviour, but you are quite > happy with the idea that a zombie can reproduce human behaviour without > having experience. Yes. > So what is to stop a p-neuron from behaving like a > c-neuron despite lacking experience if nothing stops the > zombie from acting like a human, which is arguably a much harder task? Nothing. Just as philosophical zombies are logically possible, so too are p-neurons and for the same reasons. But again the first surgeon who tries a partial replacement in Wernicke's area with p-neurons will run into serious complications. The p-neurons will require lots of programming and patches and so on to compensate for the patient's lack of experience, complications the surgeon did not anticipate because like you he does not realize that the p-neurons don't give the patient the experience of his own understanding. 
-gts From stathisp at gmail.com Fri Jan 8 15:48:29 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 02:48:29 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/8 Aware : > But if you watch carefully, he accepts functionalism, IFF the > candidate machine/substrate actually reproduces the function of the > brain. But then he goes on to show that for any formal description of > any machine, there's no place IN THE MACHINE where understanding > actually occurs. He explicitly says that a machine could fully reproduce the function of a brain but fail to reproduce the consciousness of the brain. He believes that the consciousness resides in the actual substrate, not the function to which the substrate is put. If you want to extend "function" to include consciousness then he is a functionalist, but that is not a conventional use of the term. > He's right about that. He actually *does* think there is a place in the machine where understanding occurs, if the machine is a brain. > But here he goes wrong: ?He claims that human brains obviously do have > understanding, and suggests that he has therefore proved that there is > something different about attempts to produce the same in machines. > > But there's no understanding in the human brain, either, nor any > evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER. Right. > We don't have understanding in our brains, but we don't need it. > Never did. ?We have only actions, which appear (with good reason) to > be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE > ACTOR ITSELF. Searle would probably say there's no observer in a computer. > Sure it's non-intuitive. ?It's Zen. ?In the true, non-bastardized > sense of the word. And if you're gonna design an AI that displays > consciousness, then it would be helpful to understand this so you > don't spin your wheels trying to figure out how to implement it. You could take the brute force route and copy the brain. > > >>> So I suggest (again) to you and Gordon, and Searle. that you need to >>> broaden your context. ?That there is no essential consciousness in the >>> system, but in the recursive relation between the observer and the >>> observed. Even (or especially) when the observer and observed are >>> functions of he same brain, you get self-awareness entailing the >>> reported experience of consciousness, which is just as good because >>> it's all you ever really had. >> >> Isn't the relationship between the observer and observed a function of >> the observer-observed system? > > No. ?The system that is being observed has no place in it where > meaning/semantics/qualia/intentionality can be said to exist. ?If you > look closely all you will find is components in a chain of cause and > effect. ?Syntax but no semantics, as Gordon pointed out early on in > this discussion. ?But an observer, at whatever level of recursion, > will report meaning in its terms. > > It may help to consider this: > > If I ask you (or you ask yourself (Don't worry; it's recursive)) about > the redness of an apple that you are seeing, that "experience" never > occurs in real-time. ?It's always only a product of some processing > that necessarily takes some time. 
?Real-time experience never happens; > it's a logical and practical impossibility, ?So in any case, the > information corresponding to the redness of that apple, its luminance, > its saturation, its flaws, its associations with the remembered red of > a fire truck, and on and on, is in effect delivered or made available, > after some delay, to another system. And that system will do whatever > it is that it will do, determined by its nature within that context. > In the case of delivery to the system (observer) that is going to find > out about that red, then the observer system will then do something > with that information (again completely determined by its nature, with > that context.) ?The observer system might remark out loud about the > redness of the apple, and remember doing so. ?It may say nothing, and > only store the new perception (of perceiving) the redness. ?A moment > later it may use that perception (from memory) again, of course linked > with newly delivered information as well. ?If at any point the nature > of the observer (within context, which might be me asking you what you > experienced) focuses attention again on information about its internal > state, the process repeats, keeping the observer process pretty well > satisfied. ? From a third-person point of view, there was never any > meaning anywhere in the system, including within the observer we just > described. ?But if you ask the observer about the experience, of > course it will truthfully report in terms of first-person experience. > What more is there to say? Searle would say that experience must be an intrinsic property of the matter causing the experience. If not, then it would be possible to get it out of one system reacting to or observing another system as you describe, which would be deriving meaning from syntax, which he believes is a priori impossible. -- Stathis Papaioannou From ddraig at gmail.com Fri Jan 8 09:26:02 2010 From: ddraig at gmail.com (ddraig) Date: Fri, 8 Jan 2010 20:26:02 +1100 Subject: [ExI] Fwd: [ctrl] Nuclear Powered Nanorobots 2Replace Food?-Robert Freitas on How Nuclear-Powered Nanobots Will Allow Us 2Forgo Eating a Square Meal for a Century In-Reply-To: <391531.41185.qm@web53408.mail.re2.yahoo.com> References: <391531.41185.qm@web53408.mail.re2.yahoo.com> Message-ID: Hiyas I am quite astonished to find this on the Conspiracy Theory Research List and not on exichat, but, there you go. Any comments in the body of the text are not mine - Dwayne And here we are: ---------- Forwarded message ---------- Subject: [ctrl] Nuclear Powered Nanorobots 2Replace Food?- Robert Freitas on How Nuclear-Powered Nanobots Will Allow Us 2Forgo Eating a Square Meal for a Century "In the future, we may see a type of pill for replacing food, but experts say it likely would not be a simple compound of chemicals. A pill-sized food replacement system would have to be extremely complex because of the sheer difficulty of the task it was being asked to perform, more complex than any simple chemical reaction could be. The most viable solution, according to many futurists, would be a nanorobot food replacement system. Dr. Robert Freitas, author of the Nanomedicine series and senior research fellow at the Institute for Molecular Manufacturing spoke with FUTURIST magazine senior editor Patrick Tucker about it.' Read ... 
WFS Update: Robert Freitas on How Nuclear-Powered Nanobots Will Allow Us to Forgo Eating a Square Meal for a Century Tuesday, Dec 29 2009 http://www.acceleratingfuture.com/michael/blog/2009/12/wfs-update-robert-freitas-on-how-nuclear-powered-nanobots-will-allow-us-to-forgo-eating-a-square-meal-for-a-century/

Wow, this surprised me. This is the sort of thing that I would write off as nonsense on first glance if it weren't from Robert Freitas, who is legendary for the rigor of his calculations [http://www.nanomedicine.com/]. Here's the bit, from a World Future Society update:

The Issue: Hunger. The number of people on the brink of starvation will likely reach 1.02 billion, or one-sixth of the global population, in 2009, according to the United Nations Food and Agriculture Organization (FAO). In the United States, 36.2 million adults and children struggled with hunger at some point during 2007.

The Future: The earth's population is projected to increase by 2.5 billion people in the next four decades; most of these people will be born in the countries that are least able to grow food. Research indicates that these trends could be offset by improved global education among the world's developing populations. Population declines sharply in countries where almost all women can read and where GDP is high. As many as 2/3 of the earth's inhabitants will live in water-stressed areas by 2030, and decreasing water supplies will have a direct effect on hunger. Nearly 200 million Africans are facing serious water shortages. That number will climb to 230 million by 2025, according to the United Nations Environment Program. Finding fresh water in Africa is often a huge task, requiring people (mostly women and children) to trek miles to public wells. While the average human requires only about 4 liters of drinking water a day, as much as 5,000 liters of water is needed to produce a person's daily food requirements.

Futurist Fixes 1. The Food Pill. In the future, we may see a type of pill for replacing food, but experts say it likely would not be a simple compound of chemicals. A pill-sized food replacement system would have to be extremely complex because of the sheer difficulty of the task it was being asked to perform, more complex than any simple chemical reaction could be. The most viable solution, according to many futurists, would be a nanorobot food replacement system. Dr. Robert Freitas, author of the Nanomedicine series and senior research fellow at the Institute for Molecular Manufacturing, spoke with FUTURIST magazine senior editor Patrick Tucker about it. In his books and various writings, Freitas has described several potential food replacement technologies that are somewhat pill-like. The key difference, however, is that instead of containing drug compounds, the capsules would contain thousands of microscopic robots called nanorobots. These would be in the range of a billionth of a meter in size so they could easily fit into a large capsule, though a capsule would not necessarily be the best way to administer them to the body. Also, while these microscopic entities would be called "robots," they would not necessarily be composed of metal or possess circuitry. They would be robotic in that they would be programmed to carry out complex and specific functions in three-dimensional space.

One food replacement Dr. Freitas has described is nuclear-powered nanorobots. Here's how these would work: the only reason people eat is to replace the energy they expend walking around, breathing, living life, etc. Like all creatures, we take energy stored in plant or animal matter. Freitas points out that the isotope gadolinium-148 could provide much of the fuel the body needs. But a person can't just eat a radioactive chemical and hope to be healthy; instead he or she would ingest the gadolinium in the form of nanorobots. The gadolinium-powered robots would make sure that the person's body was absorbing the energy safely and consistently. Freitas says the person might still have to take some vitamin or protein supplements, but because gadolinium-148 has a half-life of 75 years, the person might be able to go for a century or longer without a square meal.

For people who really like eating but don't like what a food-indulgent lifestyle does to their body, Freitas has two other nanobot solutions. "Nutribots" floating through the bloodstream would allow people to eat virtually anything, a big fatty steak for instance, and experience very limited weight or cholesterol gain. The nutribots would take the fat, excess iron, and anything else that the eater in question did not want absorbed into his or her body and hold onto it. The body would pass the nutribots, and the excess fat, normally out of the body in the restroom. A nanobot Dr. Freitas calls a "lipovore" would act like a microscopic cosmetic surgeon, sucking fat cells out of your body and giving off heat, which the body could convert to energy so as to eat a bit less.

Where can you read more about Robert Freitas's ideas? In the January-February 2010 issue of THE FUTURIST magazine, Freitas lays out his ideas for improving human health through nanotechnology.

Yes, there are many other technologies that could help out better with hunger right now. The most important are the three initiatives singled out by Giving What We Can as being high-leverage intervention points: schistosomiasis control, stopping tuberculosis, and the regular delivery of micronutrient packages. Another is the iodization of salt. How can these stop hunger? Well, the diseases and ill health caused by the absence of these measures are so great that alleviating them will increase the total amount of time that people have available to engage in farming, which in the short term will alleviate hunger more effectively than any direct measure. Delivering food in the form of aid fosters dependence.

Anyway, the summary of Freitas' food bot ideas above seems very limited. I'm sure that Freitas has worked out the design in greater detail. For instance, I assume the nanobots he is talking about are powered by a radioisotope rather than a nuclear fission plant, but the text doesn't make that clear enough, in my opinion. I wonder: how is it that gadolinium can be broken down into all the nutrients the body needs? Wouldn't a large amount be required, because fueling the chemical reactions of the body requires bulk and mass no matter how you slice it? I am seeing a lot of technical questions and holes in the idea, as it is brusquely presented above. I will email Freitas and ask him to point us to the proper writings. -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e...
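[As an aside on the energy arithmetic in the forwarded piece above: a rough back-of-envelope check, with round numbers that are my own assumptions rather than Freitas's figures, of how much gadolinium-148 it would take to supply an average human metabolic rate. It says nothing about the much harder questions the post raises, such as turning decay heat into usable chemistry or nutrients.]

import math

HALF_LIFE_S = 75 * 365.25 * 24 * 3600   # 75-year half-life, as in the article
DECAY_ENERGY_J = 3.2e6 * 1.602e-19      # assumed ~3.2 MeV released per alpha decay
AVOGADRO = 6.022e23
MOLAR_MASS_G = 148.0                    # grams per mole of Gd-148
METABOLIC_POWER_W = 100.0               # roughly 2000 kcal/day

decay_constant = math.log(2) / HALF_LIFE_S          # decays per atom per second
atoms_per_gram = AVOGADRO / MOLAR_MASS_G
watts_per_gram = decay_constant * atoms_per_gram * DECAY_ENERGY_J
grams_needed = METABOLIC_POWER_W / watts_per_gram

print(f"{watts_per_gram:.2f} W/g thermal, so about {grams_needed:.0f} g for 100 W")

[On those assumptions the yield comes out near 0.6 W per gram, i.e. a bit under 200 grams of the isotope for a resting human, which at least makes the decades-without-a-meal framing dimensionally plausible, whatever the other objections.]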
http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From stathisp at gmail.com Fri Jan 8 16:06:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 03:06:47 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <417015.40057.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/9 Gordon Swobe : > --- On Fri, 1/8/10, Stathis Papaioannou wrote: > >> You say experience affects behaviour, but you are quite >> happy with the idea that a zombie can reproduce human behaviour without >> having experience. > > Yes. > >> So what is to stop a p-neuron from behaving like a >> c-neuron despite lacking experience if nothing stops the >> zombie from acting like a human, which is arguably a much harder task? > > Nothing. But again the first surgeon who tries a partial replacement in Wernicke's area with p-neurons will run into serious complications. The p-neurons will require lots of programming and patches and so on to compensate for the patient's lack of experience, complications the surgeon did not anticipate because like you he does not realize that the p-neurons don't give the patient the experience of his own understanding. I think I see what you mean now. The generic p-neurons can't have any information about language pre-programmed, so the patient will have to learn to speak again. However, the same problem will occur with the c-neurons. In both cases the patient will have the capacity to learn language, and after some effort will both appear to learn language, equally quickly and equally well since the p-neurons and c-neurons are functionally equivalent. However, Sam will truly understand what he is saying while Cram will behave as if he understands what he is saying and believe that he understands what he is saying, without actually understanding anything. Is that right? An alternative experiment involves more advanced techniques whereby everyone's brain is continually scanned throughout their life, so that if they suffer a brain injury the damaged part can be replaced with neurons in exactly the same configuration as the originals, so that the patient does not lose memories or abilities. In this case, Sam and Cram would both wake up and immediately declare that they have had the power of language restored. -- Stathis Papaioannou From stefano.vaj at gmail.com Fri Jan 8 16:07:02 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 17:07:02 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> 2010/1/8 Stathis Papaioannou : > He explicitly says that a machine could fully reproduce the function > of a brain but fail to reproduce the consciousness of the brain. I suspect a few list members have stopped by now to follow this thread, and I am reading it myself on and off, but I really wonder: am I really the only one who thinks this to be a contradiction in terms, not allowing any sensible answer? Or that mystical concepts of "coscience" do not bear close inspection in the first place, so making any debate on the possibility of emulation on systems different from organic brains rather moot? 
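[Stathis's "alternative experiment" a little above (continual scanning, then repair of an injury from the latest scan) can be put in toy form. The sketch below is only an illustration under my own assumptions, with a weight matrix standing in for the brain and a deep copy standing in for the scan; it is not a claim about how such a procedure would really be done.]

import copy
import random

random.seed(0)

def make_brain(n=8):
    # A random weight matrix stands in for the patient's "memories and abilities".
    return [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def behave(brain, stimulus):
    # One linear pass stands in for behaviour driven by that configuration.
    return [sum(w * s for w, s in zip(row, stimulus)) for row in brain]

brain = make_brain()
scan = copy.deepcopy(brain)            # the ongoing life-long scan
stimulus = [1, 0, 1, 0, 1, 0, 1, 0]
before = behave(brain, stimulus)

for row in range(3):                   # injury wipes out part of the tissue
    brain[row] = [0.0] * len(brain[row])

for row in range(3):                   # replacement part built to the scanned configuration
    brain[row] = list(scan[row])

assert behave(brain, stimulus) == before
print("restored brain behaves exactly as before the injury")

[Whether the restored patient also gets back the experience of understanding, rather than just the behaviour, is of course exactly what Gordon and Stathis are disputing.]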
-- Stefano Vaj From stathisp at gmail.com Fri Jan 8 16:27:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 03:27:56 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> Message-ID: 2010/1/9 Stefano Vaj : > 2010/1/8 Stathis Papaioannou : >> He explicitly says that a machine could fully reproduce the function >> of a brain but fail to reproduce the consciousness of the brain. > > I suspect a few list members have stopped by now to follow this > thread, and I am reading it myself on and off, but I really wonder: am > I really the only one who thinks this to be a contradiction in terms, > not allowing any sensible answer? Or that mystical concepts of > "coscience" do not bear close inspection in the first place, so making > any debate on the possibility of emulation on systems different from > organic brains rather moot? At first glance it looks coherent. I really would like to know before installing such a machine in my head to replace my failing neurons whether my consciousness, whatever it is, will remain intact. It can be demonstrated to my satisfaction that the machine will function exactly the same as brain tissue, but is that enough? I want a *guarantee* that I'll feel just the same after the procedure. I think that guarantee can be provided by considering the absurd consequences should it actually be the case that brain function and consciousness are separable. -- Stathis Papaioannou From aware at awareresearch.com Fri Jan 8 16:42:08 2010 From: aware at awareresearch.com (Aware) Date: Fri, 8 Jan 2010 08:42:08 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> Message-ID: On Fri, Jan 8, 2010 at 7:48 AM, Stathis Papaioannou wrote: > 2010/1/8 Aware : > >> But if you watch carefully, he accepts functionalism, IFF the >> candidate machine/substrate actually reproduces the function of the >> brain. But then he goes on to show that for any formal description of >> any machine, there's no place IN THE MACHINE where understanding >> actually occurs. > > He explicitly says that a machine could fully reproduce the function > of a brain but fail to reproduce the consciousness of the brain. He > believes that the consciousness resides in the actual substrate, not > the function to which the substrate is put. If you want to extend > "function" to include consciousness then he is a functionalist, but > that is not a conventional use of the term. Searle is conflicted. Just not at the level you (and most others) keep focusing on. When I first read of the Chinese Room back in the early 80s my first reaction was a bit of disdain for his "obvious" lack of respect for scientific materialism. But at the same time I had the nagging thought that this is an intelligent guy, so maybe there's something more subtle going on (even though he's still wrong) and look at how the arguments just keep going around and around. The next time I came back to it, later in the 80s, it made complete sense to me (while he was still wrong but still getting a lot of mileage out of his ostensible paradox.) As I've said before on this list, paradox is always a matter of insufficient context. In the bigger picture all the pieces must fit. >> He's right about that. 
> > He actually *does* think there is a place in the machine where > understanding occurs, Yes, I've emphasized that mistaken premise as loudly as I could, a few times. > if the machine is a brain. or a "fully functional equivalent", WHATEVER THAT MEANS. Note that Searle, like Chalmers, does not provide any resolution, but only emphasizes "the great mystery", the "hard problem" of consciousness. >> But here he goes wrong: ?He claims that human brains obviously do have >> understanding, and suggests that he has therefore proved that there is >> something different about attempts to produce the same in machines. >> >> But there's no understanding in the human brain, either, nor any >> evidence for it, EXCEPT FOR THE REPORTS OF AN OBSERVER. > > Right. > >> We don't have understanding in our brains, but we don't need it. >> Never did. ?We have only actions, which appear (with good reason) to >> be meaningful to an observer EVEN WHEN THAT OBSERVER IDENTIFIES AS THE >> ACTOR ITSELF. > > Searle would probably say there's no observer in a computer. I agree that's what he would say. It works well with popular opinion, and keeps the discussion spinning around and around. >> Sure it's non-intuitive. ?It's Zen. ?In the true, non-bastardized >> sense of the word. And if you're gonna design an AI that displays >> consciousness, then it would be helpful to understand this so you >> don't spin your wheels trying to figure out how to implement it. > > You could take the brute force route and copy the brain. Yes, Markham is working on implementing something like that, and Kurzweil uses that as his limiting case for predicting the arrival of "human equivalent" artificial intelligence. There are complications with that "obvious" approach, but I have no desire to embark on another, likely fruitless, thread at this time. >>>> So I suggest (again) to you and Gordon, and Searle. that you need to >>>> broaden your context. ?That there is no essential consciousness in the >>>> system, but in the recursive relation between the observer and the >>>> observed. Even (or especially) when the observer and observed are >>>> functions of he same brain, you get self-awareness entailing the >>>> reported experience of consciousness, which is just as good because >>>> it's all you ever really had. >> described. ?But if you ask the observer about the experience, of >> course it will truthfully [without deception] report in terms of first-person experience. >> What more is there to say? > > Searle would say that experience must be an intrinsic property of the > matter causing the experience. If not, then it would be possible to > get it out of one system reacting to or observing another system as > you describe, which would be deriving meaning from syntax, which he > believes is a priori impossible. As far as I know, he does NOT say that "experience" (qualia/meaning/intentionality/consciousness/self/free-will) must be an intrinsic property of the matter. He appears content to present it as a great mystery, one that quite conveniently pushes people's buttons by appearing on one side to elevate the status of humans as somehow possessing a special quality, and on the other side by offending the righteous sensibilities of those who feel they must defend scientific materialism. It's all good for Searle as the debate swirls around him and around and around... 
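[A toy version of the observer-report picture Jef sketches above, with names and structure that are purely illustrative: nothing in the pipeline below holds intrinsic meaning, the stimulus is processed after a delay and stored, and yet when the system is asked about itself it sincerely produces a first-person report from that stored state.]

from collections import deque

class Observer:
    def __init__(self):
        self.pipeline = deque()   # processing that necessarily takes some time
        self.memory = []          # stored perceptions of (perceiving) the stimulus

    def sense(self, stimulus):
        self.pipeline.append(stimulus)            # raw signal, not yet "experienced"

    def process(self):
        while self.pipeline:
            features = {"colour": self.pipeline.popleft(), "source": "retina"}
            self.memory.append(features)          # delivered, after a delay

    def report(self):
        if not self.memory:
            return "I haven't seen anything."
        # The first-person report is generated entirely from stored state.
        return f"I experienced {self.memory[-1]['colour']}."

obs = Observer()
obs.sense("red")
obs.process()
print(obs.report())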
- Jef From stefano.vaj at gmail.com Fri Jan 8 16:59:03 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 8 Jan 2010 17:59:03 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <191400.54764.qm@web36501.mail.mud.yahoo.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> Message-ID: <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> 2010/1/8 Stathis Papaioannou : > At first glance it looks coherent. I really would like to know before > installing such a machine in my head to replace my failing neurons > whether my consciousness, whatever it is, will remain intact. As long as you "wrongly" believe to be conscious after such install, what difference does it make? Not only "practical" difference, mind, even *merely logical* difference... Everybody is a Dennett's zimbo, nobody is... -- Stefano Vaj From thespike at satx.rr.com Fri Jan 8 17:06:31 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 11:06:31 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <201001080840.o088ec6A009055@andromeda.ziaspace.com> References: <201001080840.o088ec6A009055@andromeda.ziaspace.com> Message-ID: <4B476617.8060800@satx.rr.com> On 1/8/2010 2:40 AM, Max More wrote: > It maybe that my resistance to claims of psi phenomena are just sour > grapes, since in my own life I've never observed the slightest hint of > psychic events or abilities. Good post, Max. thanks. I have seen very little evidence myself *in my own life*--but I seem to be pretty much the contrary of the type of temperament that seems to function "psychically". I can't see very well either, but that doesn't make me disbelieve in sight. > However, past experience makes me extremely reluctant to devote > significant time to looking at new evidence (esp. when so much previous > new evidence ended up looking bad). I'm not sure that's true if one sticks to rigorous tests of fairly minimal claims. I hope it's obvious that I'm not all puppy dog excited about solar astrology, dowsing, Rosicrucian secrets from Atlantis, ghosts that clank in the night, "psychotronic weapons", Mayan 2012 apocalyptic prophecies, and lots of other inane topics that fill the Coast-to-Coat airwaves. > I would like your book on the topic, Damien. But, given my past > experience and the apparently minor nature of claimed results, it's just > not likely that it's going to be a top priority. I know that's annoying > and frustrating, but I hope you can understand why I see it that way > (and, I suspect, quite a few other people on this list). I do understand that, of course, but what offends yet also grimly amuses me is the conditioned reflex scorn--the sort of thing our friend John Clark specializes in--that complacently dismisses years of careful work without knowing the first thing about it. As you say, you read a lot on this topic when you were a kid, tried some magick, etc, so obviously you don't fall into this category--or not quite, because I suspect you're still a victim of premature closure. I know how that works, because I was in the same boat for years. I was enthusiastic about psi claims as a young adolescent, mostly from reading sf editorials about Rhine etc, then stopped taking it seriously as a university student. I read all the pop-critical books whacking away at the loonies, the Scientologists, etc, with great relish. 
Then when I was nearing 30 I got interested again after reading a paper about a university study that had worked, and came up with some approaches that seemed promising. (Years later I found out that the same ideas were being explored at the same time, or a bit later, by the well-funded CIA and military researchers in what was eventually known as Star Gate.) Curious, but unable to afford massive research, I went back to old published data and saw that when some elementary information theory was applied to it, out popped rather startling indications that psi was real after all. This was especially impressive when it showed up in data from experiments that had apparently failed. (If suppressing "negative results" had been the rule, I'd never have seen this data; luckily, parapsychologists in the 1930s and 1950s were often prepared to publish what looked like failed experiments.) Subsequently, no-one was more surprised that I to discover that serious "remote viewing" claims--Joe McMoneagle's and Stephan Schwartz's, say--were often corroborated (despite the encrustation of bogosity from scammers now claiming falsely to have been big wheels in Star Gate). So why aren't psychics rich? Why is Osama still running free? (Gee, who would gain from that?) Why do we bother with cars instead of levitating? Good questions, but then if there are antibiotics why do people still get sick, and if there's dark matter why isn't there a really good theory to explain it, and on and on. Damien Broderick From pjmanney at gmail.com Fri Jan 8 17:47:04 2010 From: pjmanney at gmail.com (PJ Manney) Date: Fri, 8 Jan 2010 09:47:04 -0800 Subject: [ExI] H+ Magazine blog on Bitch Slap Message-ID: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> For a bit of shameless, self-promotional fun today: http://www.hplusmagazine.com/editors-blog/how-bitch-slap-will-bring-about-singularity PJ From jrd1415 at gmail.com Fri Jan 8 18:25:24 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 8 Jan 2010 11:25:24 -0700 Subject: [ExI] Psi (no need to read this post you already know what itsays ) In-Reply-To: <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: Isabelle Hakala : > Please try to keep these discussions civil, as we want to encourage people > to share their opinions without feeling attacked by others, otherwise we > will not have a diversity of opinions, which is needed to stretch our > capacity for reasoning. > > -Isabelle Hear, hear! (or is it Here, here! Whatever.) Welcome to the list, Isabelle. And of course, your reminder re the benefits of civility is always,...well... worth reminding... others...of. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From jonkc at bellsouth.net Fri Jan 8 18:40:45 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 8 Jan 2010 13:40:45 -0500 Subject: [ExI] Psi. 
(no need to read this post you already know what itsays ) In-Reply-To: <4B464EEF.1000200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <4B464EEF.1000200@satx.rr.com> Message-ID: <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> On Jan 7, 2010, Damien Broderick wrote: > Some refuse even to consider evidence when it's provided (John Clark, say, who proudly declares that he won't look at anything pretending to be evidence for psi, since he knows a priori that it's BULLSHIT!!!). That is only partially true, I'm more than willing to look at evidence provided it really is evidence. However its true I'm not willing to look at the "evidence" posted on a website by somebody I've never heard of because there is no web of trust between me the reader and the originator of this "evidence" as there is in a legitimate Scientific journal. As a result the only thing stuff like this is really evidence for is that somebody knows how to type. At the start of every year for the last 10 years I've made a paranormal prediction for the coming year; I've predicted that a positive Psi (or ESP or spiritualism) article will NOT appear in Nature or Science or Physical Review Letters for the next year, and I've been proven right each and every year; I must be psychic. I made an identical prediction for this year. Anybody want to bet against me? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Fri Jan 8 19:08:52 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 08 Jan 2010 13:08:52 -0600 Subject: [ExI] Psi. (no need to read this post you already know what itsays ) In-Reply-To: <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <4B464EEF.1000200@satx.rr.com> <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> Message-ID: <4B4782C4.5010408@satx.rr.com> On 1/8/2010 12:40 PM, John Clark wrote: > At the start of every year for the last 10 years I've made a paranormal > prediction for the coming year; I've predicted that a positive Psi (or > ESP or spiritualism) article will NOT appear in Nature or Science or > Physical Review Letters for the next year No, you've made a sociologically astute prediction based on your implicit knowledge of the reigning paradigm prejudices of those journals. I can make a similar prediction: if any scientist known to be critical or skeptical of psi-claims does publish a replication paper (in a peer-reviewed journal willing to print it) supporting the reality of such phenomena, he or she will immediately become known and mocked as a "believer" and ignored by all decent right-thinking scientists, at least on that topic. (This happens routinely in science, for understandable reasons. An example discussed recently in the NYT, IIRC, was a woman whose work on epigenetic effects was dropped scornfully into the trash bin in front of her by a senior scientist she showed it to, hindering her work for many years.) 
Damien Broderick From jonkc at bellsouth.net Fri Jan 8 18:43:59 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 8 Jan 2010 13:43:59 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <684147.29925.qm@web36506.mail.mud.yahoo.com> References: <684147.29925.qm@web36506.mail.mud.yahoo.com> Message-ID: <6CA3F4EB-CE15-4444-9A4C-22360E3CDB14@bellsouth.net> On Jan 8, 2010, Gordon Swobe wrote: > the presence of oxygen clearly plays a role in whatever physical conditions must exist in the brain for it to have subjective experience. So without oxygen you will die, well it may not be profound but unlike most of your utterances at least it is true. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Jan 8 18:51:39 2010 From: spike66 at att.net (spike) Date: Fri, 8 Jan 2010 10:51:39 -0800 Subject: [ExI] H+ Magazine blog on Bitch Slap In-Reply-To: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> References: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> Message-ID: <55B7EDA404DD4D778F27C240C759FC43@spike> > ...On Behalf Of PJ Manney ... > Subject: [ExI] H+ Magazine blog on Bitch Slap > > For a bit of shameless, self-promotional fun today: > > http://www.hplusmagazine.com/editors-blog/how-bitch-slap-will- > bring-about-singularity > > PJ Hmmm. PJ, I think I'll skip the whole Bitch Slap scene, thanks. The dreamy Nolan Sisters are more my style. They cause me to struggle for life extension just so I can preserve them for future generations. {8^D spike From pharos at gmail.com Fri Jan 8 19:25:04 2010 From: pharos at gmail.com (BillK) Date: Fri, 8 Jan 2010 19:25:04 +0000 Subject: [ExI] H+ Magazine blog on Bitch Slap In-Reply-To: <55B7EDA404DD4D778F27C240C759FC43@spike> References: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com> <55B7EDA404DD4D778F27C240C759FC43@spike> Message-ID: On 1/8/10, spike wrote: > Hmmm. PJ, I think I'll skip the whole Bitch Slap scene, thanks. The dreamy > Nolan Sisters are more my style. They cause me to struggle for life > extension just so I can preserve them for future generations. {8^D > > In the UK, at the height of their fame in the 1970s just about every young male of that era went to bed dreaming about being attacked by the Nolan sisters. ;) BillK From spike66 at att.net Fri Jan 8 19:22:30 2010 From: spike66 at att.net (spike) Date: Fri, 8 Jan 2010 11:22:30 -0800 Subject: [ExI] Psi (no need to read this post you already know whatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com><7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM><79471C131D7F4EE28EE05A725ED29AED@spike><4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> Message-ID: <3B410568F34D40948CD9713FAFCC024B@spike> > ...On Behalf Of Jeff Davis > ... > Isabelle Hakala : > > > Please try to keep these discussions civil... -Isabelle > > Hear, hear! (or is it Here, here! Whatever.) Neither Jeff, but rhather: Hear Here. Think about it, makes perfect sense. (Otherwise it would have been Hear squared.) Back in the old days before sound amplification, government was accomplished by a bunch of politicians in a single room discussing some topic. Perhaps several were talking at the same time. When someone uttered some noteworthy comment, a bystander would say Hear Here! to call the attention of the others. The rhyme makes it better than Listen Here. > Welcome to the list, Isabelle. 
And of course, your reminder > re the benefits of civility is always,...well... worth reminding... > others...of. Best, Jeff Davis Read here! ^^^^ spike From spike66 at att.net Fri Jan 8 20:30:08 2010 From: spike66 at att.net (spike) Date: Fri, 8 Jan 2010 12:30:08 -0800 Subject: Re: [ExI] H+ Magazine blog on Bitch Slap In-Reply-To: References: <29666bf31001080947u6733fdebj56c544e0276b805b@mail.gmail.com><55B7EDA404DD4D778F27C240C759FC43@spike> Message-ID: <4E297F8025CE417EAC49376D5F8A48FA@spike> > ...On Behalf Of BillK > Subject: Re: [ExI] H+ Magazine blog on Bitch Slap > > On 1/8/10, spike wrote: > > ...The dreamy Nolan Sisters are more my style... spike > > In the UK, at the height of their fame in the 1970s just > about every young male of that era went to bed dreaming about > being attacked by the Nolan sisters. ;) BillK If you meant attacked in the sexual sense, that would be quite unlikely, for it is impossible to rape the willing. Nay, far beyond willing, eager. Strange that I had never heard of them. Perhaps they never did much on American television or radio. Other yanks, are the Nolans new to you? YouTube forms a kind of time machine. SF writers of the past did not envision time travel in this indirect manner, but it might even have some advantages over actual time travel: we get the Nolans free, without having to go back to 1978 and buy a ticket to their concert or suffer through commercial messages on TV. I am now thinking there is something magic about Ireland. That island has produced at least two groups of five stunning beauties: the Nolan sisters in the 1970s and three decades later the Celtic Women: http://www.youtube.com/watch?v=LHOyPLSVam4&feature=related The youngest of these in this video, the stunning Hayley Westenra, is actually from Australia, another island continent. The fifth Celtic Woman, not shown here, is Máiréad Nesbitt, who is not only to-stay-alive-for* gorgeous, but is also a monster talent on the fiddle and a lithe dancer. Check out the simultaneous dancing, fiddling, and being beautiful: http://vids.myspace.com/index.cfm?fuseaction=vids.individual&videoID=2022329931 One's fondest dream could never do justice to these. The US has a much larger population, but has not produced or enjoyed such native talent as Ireland and Australia since Karen Carpenter perished on the darkly memorable day, 4 February 1983. spike *The more common expression to-die-for gorgeous makes no sense. Rather the opposite: picture a dying patient making the choice between continuing the painful radiation and chemotherapy and letting nature take its course. Chancing to see Máiréad or for that matter Hayley or any of the other Celtic Women, the patient may reverse course and decide it is worth it to have a little more time to enjoy these to-stay-alive-for gorgeous ladies. From gts_2000 at yahoo.com Fri Jan 8 23:26:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 15:26:45 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <366490.44362.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/8/10, Stathis Papaioannou wrote: > I think I see what you mean now. The generic p-neurons > can't have any information about language pre-programmed, so the patient > will have to learn to speak again. However, the same problem will occur > with the c-neurons. Replacement with c-neurons would work in a straightforward manner even supposing the patient might need to relearn language.
But with p-neurons he will have no experience of understanding words even after his surgeon programs them. And because the experience of understanding words affects the behavior of neurons associated with that understanding, our surgeon/programmer of p-neurons faces a tremendous challenge, one that his c-neuron replacing colleagues needn't face. > However, Sam will truly understand what he is saying while Cram will > behave as if he understands what he is saying and believe that he > understands what he is saying, without actually > understanding anything. Is that right? He will behave outwardly as if he understands words but he will not "believe" anything. He will have weak AI. -gts From gts_2000 at yahoo.com Fri Jan 8 23:57:09 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 8 Jan 2010 15:57:09 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: Message-ID: <892247.48619.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/8/10, Aware wrote: > But there's no understanding in the human brain, > either, nor any evidence for it, EXCEPT FOR THE REPORTS OF AN > OBSERVER. And what do you think that observer is reporting about, Jef? Without your self-reported observations of your own understanding, you would lack intentionality. You would have only weak AI. > We don't have understanding in our brains, but we > don't need it. Absurd. I suppose you think your sentence above has meaning, and that you understand it. I suppose also that if I removed certain parts of your brain or impaired it with drugs then you would cease to understand it. Sure seems to me that you understand words in your brain. -gts From aware at awareresearch.com Sat Jan 9 00:35:36 2010 From: aware at awareresearch.com (Aware) Date: Fri, 8 Jan 2010 16:35:36 -0800 Subject: [ExI] Some new angle about AI In-Reply-To: <892247.48619.qm@web36508.mail.mud.yahoo.com> References: <892247.48619.qm@web36508.mail.mud.yahoo.com> Message-ID: On Fri, Jan 8, 2010 at 3:57 PM, Gordon Swobe wrote: > --- On Fri, 1/8/10, Aware wrote: > >> But there's no understanding in the human brain, >> either, nor any evidence for it, EXCEPT FOR THE REPORTS OF AN >> OBSERVER. > > And what do you think that observer is reporting about, Jef? > > Without your self-reported observations of your own understanding, you would lack intentionality. You would have only weak AI. > >> We don't have understanding in our brains, but we >> don't need it. > > Absurd. I suppose you think your sentence above has meaning, and that you understand it. I suppose also that if I removed certain parts of your brain or impaired it with drugs then you would cease to understand it. Sure seems to me that you understand words in your brain. It's ironic how you strip everything I wrote down to a little sound bite--removing the context--so you can point to it and call it absurd. Ironic because appreciating the importance and role of context is key to resolving your puzzle. Have fun. My holiday visit to Extropy-chat has about run its course. I've been well-reminded that there's little benefit to continued participation so I'll simply remain Aware in the background and perhaps check in with you later. 
- Jef From stathisp at gmail.com Sat Jan 9 04:41:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 15:41:56 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <366490.44362.qm@web36508.mail.mud.yahoo.com> References: <366490.44362.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/9 Gordon Swobe : >> However, Sam will truly understand what he is saying while Cram will >> behave as if he understands what he is saying and believe that he >> understands what he is saying, without actually >> understanding anything. Is that right? > > He will behave outwardly as if he understands words but he will not "believe" anything. He will have weak AI. The patient was not a zombie before the operation, since most of his brain was functioning normally, so why would he be a zombie after? Before the operation he sees that people don't understand him when he speaks, and that he doesn't understand them when they speak. He hears the sounds they make, but it seems like gibberish, making him frustrated. After the operation, whether he gets the p-neurons or the c-neurons, he speaks normally, he seems to understand things normally, and he believes that the operation is a success as he remembers his difficulties before and now sees that he doesn't have them. Perhaps you see the problem I am getting at and you are trying to get around it by saying that Cram would become a zombie. But by what mechanism would the replacement of only a few neurons negate the consciousness of the rest of the brain? -- Stathis Papaioannou From rafal.smigrodzki at gmail.com Sat Jan 9 06:44:42 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 9 Jan 2010 01:44:42 -0500 Subject: [ExI] ancap propaganda Message-ID: <7641ddc61001082244g374e2998w37aa5b742f878f62@mail.gmail.com> I wrote a way-too-long post on anarchocapitalism (under the name polycentric law, order and defense) http://triviallyso.blogspot.com/2010/01/of-beating-hearts-part-2.html Comments welcome. Rafal From stathisp at gmail.com Sat Jan 9 10:00:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jan 2010 21:00:01 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> Message-ID: 2010/1/9 Stefano Vaj : > 2010/1/8 Stathis Papaioannou : >> At first glance it looks coherent. I really would like to know before >> installing such a machine in my head to replace my failing neurons >> whether my consciousness, whatever it is, will remain intact. > > As long as you "wrongly" believe to be conscious after such install, > what difference does it make? > > Not only "practical" difference, mind, even *merely logical* difference... > > Everybody is a Dennett's zimbo, nobody is... I agree with you, of course. The zombie neurons are just as good as real neurons in every respect, so there is really no basis in distinguishing them as zombie neurons. But this needs to be carefully explained; if it were obvious, Gordon would not still be arguing. -- Stathis Papaioannou From bbenzai at yahoo.com Sat Jan 9 17:41:11 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 9 Jan 2010 09:41:11 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <298569.9122.qm@web113602.mail.gq1.yahoo.com> In 'Are We Spiritual Machines?: Ray Kurzweil vs. 
the Critics of Strong AI', John Searle says: "Here is what happened inside Deep Blue. The computer has a bunch of meaningless symbols that the programmers use to represent the positions of the pieces on the board. It has a bunch of equally meaningless symbols that the programmers use to represent options for possible moves." This is a perfect example of wny I can't take the guy seriously. He talks about 'meaningless' symbols, then goes on to describe what those symbols mean! He is *explicitly* stating that two sets of symbols represent positions on a chess board, and options for possible moves, respectively, while at the same time claiming that these symbols are meaningless. wtf? Does he even read what he writes? I'm baffled that anyone can write something like this, and not immediately delete it, looking round in embarrassment hoping that nobody saw it. What kind of 'meaning' can a symbol have, other than what it represents in the context in which it appears? If, in Deep Blue, the number hA45 stored in a particular memory location is used to represent "White Queen on square 4:6", that is its meaning to Deep Blue. Just as a specific pattern of neuron firings in a certain part of my brain represents the taste of chocolate ice-cream. If that's a 'meaningless symbol', then I must think using meaningless symbols. It seems to work just fine, so if "meaningless" (in this context) means anything, it's irrelevant to the functioning of at least one mind. Maybe I'm a zombie! (If a zombie realises it's a zombie, does that mean it's cured?) Ben Zaiboc From gts_2000 at yahoo.com Sat Jan 9 18:11:55 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 10:11:55 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <606713.51772.qm@web36508.mail.mud.yahoo.com> --- On Fri, 1/8/10, Stathis Papaioannou wrote: >> He will behave outwardly as if he understands words >> but he will not "believe" anything. He will have weak AI. > > The patient was not a zombie before the operation, since > most of his brain was functioning normally, so why would he be a zombie > after? To believe something one must have an understanding of the meaning of the thing believed in, and I have assumed from the beginning of our experiment that the patient presents with no understanding of words, i.e., with complete receptive aphasia from a broken Wernicke's. I don't believe p-neurons will cure his aphasia subjectively, but I think his surgeon will eventually succeed in programming him to behave outwardly like one who understands words. After leaving the hospital, the patient might tell you he believes in Santa Claus, but he won't actually "believe" in it; that is, he won't have a conscious subjective understanding of the meaning of "Santa Claus". > Before the operation he sees that people don't understand > him when he speaks, and that he doesn't understand them when they > speak. He hears the sounds they make, but it seems like gibberish, making > him frustrated. After the operation, whether he gets the > p-neurons or the c-neurons, he speaks normally, he seems to understand > things normally, and he believes that the operation is a success as he > remembers his difficulties before and now sees that he doesn't have > them. Perhaps he no longer feels frustrated but still he has no idea what he's talking about! > Perhaps you see the problem I am getting at and you are > trying to get around it by saying that Cram would become a zombie. 
I have only this question unanswered in my mind: "How much more complete of a zombie does Cram become as a result of the surgeon's long and tedious process of reprogramming his brain to make him seem to function normally despite his inability to experience understanding? When the surgeon finally finishes with him such that he passes the Turing test, will the patient even know of his own existence?" -gts From max at maxmore.com Sat Jan 9 18:22:42 2010 From: max at maxmore.com (Max More) Date: Sat, 09 Jan 2010 12:22:42 -0600 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Avatar: misanthropy in three dimensions http://www.spiked-online.com/index.php/site/earticle/7895/ -- Comments from anyone who has seen the movie? (I haven't yet.) Max From jonkc at bellsouth.net Sat Jan 9 18:48:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 9 Jan 2010 13:48:33 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46B6DF.50504@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> Message-ID: <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> On Jan 7, 2010, Damien Broderick wrote: > I can't easily imagine this being acceptable on ExIchat if someone was trying to laugh away/explain away results of professional stem cell work If a high school dropout who worked as the bathroom attendant at the zoo had a website and claimed to have made a major discovery about stem cells from an experiment described on that website I would not bother to read it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Jan 9 18:21:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 9 Jan 2010 13:21:56 -0500 Subject: [ExI] Psi. (no need to read this post you already know what itsays ) In-Reply-To: <4B4782C4.5010408@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <4B464EEF.1000200@satx.rr.com> <6EA6CEF2-439B-49A1-BBC8-0ACA23256DF2@bellsouth.net> <4B4782C4.5010408@satx.rr.com> Message-ID: <6ACAEFFA-771E-443B-8517-22914D2F560C@bellsouth.net> On Jan 8, 2010 Damien Broderick wrote: >> At the start of every year for the last 10 years I've made a paranormal >> prediction for the coming year; I've predicted that a positive Psi (or >> ESP or spiritualism) article will NOT appear in Nature or Science or >> Physical Review Letters for the next year > > No, you've made a sociologically astute prediction based on your implicit knowledge of the reigning paradigm prejudices of those journals. So you think the reason these 3 journals (and I could easily extend my prediction to several dozen respectable journals) don't publish Psi stuff has all to do with sociology and nothing to do with Science. 
I think they don't publish Psi papers because what they contain conflicts with reality. On my side I have journals that have published every major scientific discovery of the 20'th century, on your side you have some bozo nobody ever heard of who typed some stuff onto a website or onto a dead tree that also nobody has ever heard of. > This happens routinely in science Not like this it doesn't! Sure on a few rare occasions somebody comes up with a correct idea that wasn't fully accepted for a long time, the most extreme example of that I can think of is continental drift, but even when it was in the minority the support for it never dropped to zero in the scientific community as it has for Psi. And the truth is that the evidence Wegener gave to support his theory was pretty weak and would remain weak until the 1960's. When the evidence did become good the Journals I mention didn't refuse to print it, far from it, they competed madly with each other to be the first to publish more about this wonderful new discovery. If the confirmation of Psi became as strong as it was for continental drift was in the 60's the same thing would happen, but that didn't happen last year, it won't happen next year, it won't happen next decade and it won't happen next century. And for every Wegener who was incorrectly labeled a crackpot there were tens of thousands who really were crackpots. Damien, we last went into this Psi stuff a couple of years ago, in that time objects 13 billion light years away have been observed, microprocessors have become 5 or 6 times as powerful, Poincar? conjecture was proven and the genome of hundreds of organisms have been sequenced; and what advances has the science of Psi achieved in that time? Zero, zilch, goose egg. Well over a century ago, long before the discovery of Quantum Mechanics or Relativity and even before Evolution and the Electromagnetic Theory of Light were generally accepted, people were saying Science was too hidebound to accept the existence of Psi and they are saying the exact same thing today as I'm certain they will be saying next century. > An example discussed recently in the NYT, IIRC, was a woman whose work on epigenetic effects was dropped scornfully into the trash bin in front of her by a senior scientist she showed it to, hindering her work for many years. Did it hinder her work for centuries? And if it was me I would have made a copy. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Jan 9 18:54:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 10:54:11 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: <298569.9122.qm@web113602.mail.gq1.yahoo.com> Message-ID: <575831.58228.qm@web36504.mail.mud.yahoo.com> --- On Sat, 1/9/10, Ben Zaiboc wrote: > In 'Are We Spiritual Machines?: Ray > Kurzweil vs. the Critics of Strong AI', John Searle says: > > "Here is what happened inside Deep Blue. The computer has a > bunch of meaningless symbols that the programmers use to > represent the positions of the pieces on the board. It has a > bunch of equally meaningless symbols that the programmers > use to represent options for possible moves." > > > This is a perfect example of why I can't take the guy > seriously.? He talks about 'meaningless' symbols, then > goes on to describe what those symbols mean! 
He is > *explicitly* stating that two sets of symbols represent > positions on a chess board, and options for possible moves, > respectively, while at the same time claiming that these > symbols are meaningless.? wtf?? Human operators ascribe meanings to the symbols their computers manipulate. Sometimes humans forget this and pretend that the computers actually understand the meanings. It's an understandable mistake; after all it sure *looks* like computers understand the meanings. But then that's what programmers do for a living: we program dumb machines to make them look like they have understanding. The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" And Searle's answer is: "It won't happen from running formal syntactical programs on hardware as we do today, because computers and their programs cannot and will never get semantics from syntax. We'll need to find another way. And we know we can do it even if we don't yet know the way. After all nature did it in these machines we call humans." -gts From sparge at gmail.com Sat Jan 9 18:56:02 2010 From: sparge at gmail.com (Dave Sill) Date: Sat, 9 Jan 2010 13:56:02 -0500 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: On Sat, Jan 9, 2010 at 1:22 PM, Max More wrote: > > -- Comments from anyone who has seen the movie? (I haven't yet.) He's got the facts right, and clearly the movie isn't intended to paint the human race as faultless, but I think he's making too much out of it. This is primarily entertainment, not propaganda. It's fantasy. But it does mirror certain historical events. Wanting to be something better than human is a basic Extropian notion but the reviewer seems to think that it's akin to heresy. I definitely recommend that everyone catch Avatar in 3D in a theater. It's stunning--not just the CGI, which is fantastic, but also the human creativity that imagined it and brought it to life. It's a familiar story, but it's told very well and in a novel setting. -Dave From jonkc at bellsouth.net Sat Jan 9 18:37:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 9 Jan 2010 13:37:33 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46A73E.2090200@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> Message-ID: <5F3842E7-3035-4CCD-90FC-FC44F1FFD539@bellsouth.net> On Jan 7, 2010, Damien Broderick wrote: > If you're assuming a lack of probity in the experimenter, why not just say the whole thing was scripted or made up and have done with it? That sounds like a fine idea to me. > This is the bottom line with most skeptical retorts. What would it take to dispose of this canard? 
If the Psi miracle came from somebody with a reputation for being an outstanding experimental scientist then I would start to get interested; if his extraordinary results were duplicated by other experimentalists that I respected then I would be convinced; this hasn't happened yet, it hasn't even come close to happening. And no ASCII sequence posted on a obscure website claiming an experimental breakthrough could do that. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Jan 9 19:20:58 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 13:20:58 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> Message-ID: <4B48D71A.6010806@satx.rr.com> On 1/9/2010 12:48 PM, John Clark wrote: > If a high school dropout who worked as the bathroom attendant at the zoo > had a website and claimed to have made a major discovery about stem > cells from an experiment described on that website I would not bother to > read it. Neither would I, probably. When a biochemistry PhD and Research Fellow of the Royal Society, like Sheldrake, does so, I'd be less quick to dismiss his scientific report. What are your equivalent credentials, John? (Not that this is an important point, and verges on the ad hominem, but you're the one who keeps introducing imaginary and demeaning dropouts in trailer parks as the supposed source of everything you dismiss.) Damien Broderick From gts_2000 at yahoo.com Sat Jan 9 19:54:47 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 11:54:47 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <855333.87547.qm@web36505.mail.mud.yahoo.com> Stathis, You have mentioned on a couple of occasions that you think I must believe that the brain does something that does not lend itself to computation. I made a mental note to myself to try to figure out why you say this. I had planned to go through your messages again, but instead I'll try to address what I think you may have meant. Assume we know everything we can possibly know about the brain and that we use that knowledge to perfectly simulate a conscious brain on a computer. Even though I believe everything about the brain lends itself to computation, and even though I believe our hypothetical simulation in fact computes everything possible about a real conscious brain, I still also say that our simulation will have no subjective experience. Perhaps you want to know how can I say this without assigning some kind of strange non-computable aspect to natural brains. You may want to know how I can say this without asserting mind/matter duality or some other mystical concept to explain subjective experience. Understandable questions. The answer is that I say it because I don't believe the brain is actually computer. 
Some people seem to think that if we can compute X on a computer then a computer simulation of X must equal X. But that's just a blatant non sequitur. -gts From msd001 at gmail.com Sat Jan 9 20:11:10 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 9 Jan 2010 15:11:10 -0500 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> On Sat, Jan 9, 2010 at 1:54 PM, Gordon Swobe wrote: > The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" How can we make people actually understand the meanings and not merely appear to understand the meanings? To the degree that it/you/anyone serves my purpose, I don't care what it/you/they "understand" as long as the appropriate behavior is displayed. Why is that so difficult to grasp? When I tell me dog "Sit" and it sits down, I don't need to be concerned about the dog's qualia or a platonic concept of sitting - only that the dog does what I want. If I tell a machine to find a worthy stock for investment, that's what I expect it should do. I would even be happy to entertain follow-up conversation with the machine regarding my opinion of "worthy" and my long-term investment goals - just like I would expect with a real broker or investment advisor. At another level of conversational interaction with proposed AGI, it might start asking me novel questions for the purpose of qualifying its model of my expected behavior. At that point, how can any of us declare that the machine doesn't have 'understanding' of the data it manages? Why would we? Understanding is highly overrated. Many people stumble through their lives with only a crude approximation of what is going on around them - and it works. From thespike at satx.rr.com Sat Jan 9 20:13:39 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 14:13:39 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <855333.87547.qm@web36505.mail.mud.yahoo.com> References: <855333.87547.qm@web36505.mail.mud.yahoo.com> Message-ID: <4B48E373.9030607@satx.rr.com> On 1/9/2010 1:54 PM, Gordon Swobe wrote: > how can I say this without assigning some kind of strange non-computable aspect to natural brains. > The answer is that I say it because I don't believe the brain is actually [a] computer. Isn't that exactly saying that you assign some kind of non-computable aspect to natural brains? (No reason why it should be strange, though.) As I said several days ago, a landslide doesn't seem to me to compute the trajectories of all its particles--at least not in any sense that I'm familiar with. We can *model* the process with various degrees of accuracy using equations, but it looks like a category mistake to suppose that the nuclear reactions in the sun are *calculating* what they're doing. I realize that Seth Lloyd and others disagree (or I think that's what he's saying in PROGRAMMING THE UNIVERSE--that the universe is *calculating itself*) but the whole idea of calculation seems to me to imply a compression or reduction of the mapping of some aspects of one large unwieldy system onto another extremely stripped-down toy system. That might be wrong, I know. I hope Gordon knows it might be wrong as well. 
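To put the "stripped-down toy system" point concretely, here is a minimal sketch (toy physics, invented numbers, Python only because it's handy): the whole "model" of a rock in a landslide is a couple of variables and one update rule, a compression we impose from outside; the landslide itself lugs no such description around.

import math

# Toy model of one rock sliding down a slope: the entire "system" is a few
# numbers and one update rule. The real landslide has no such compressed
# description; we impose it when we choose to model it.
def simulate_rock(slope_deg=30.0, friction=0.4, dt=0.01, steps=500):
    g = 9.81
    angle = math.radians(slope_deg)
    # net acceleration along the slope (zero if friction dominates)
    a = max(0.0, g * (math.sin(angle) - friction * math.cos(angle)))
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        velocity += a * dt
        position += velocity * dt
    return position, velocity

s, v = simulate_rock()
print(f"after 5 s: {s:.1f} m along the slope, moving at {v:.1f} m/s")

Whether the rock-and-slope system is itself "computing" anything like this is exactly the question at issue.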
Damien Broderick From emlynoregan at gmail.com Sat Jan 9 20:51:17 2010 From: emlynoregan at gmail.com (Emlyn) Date: Sun, 10 Jan 2010 07:21:17 +1030 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> 2010/1/10 Max More : > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) > > Max Slight spoilers ahead, although spoilers don't really matter with regard to this movie. First, see the movie, in 3D. Disregard all the reviews, and just go see it. When people say "the effects are great", they are seriously underselling. What makes Avatar such a monumental achievement is the animated fantasy world (Pandora), which outstrips anything we've seen before in terms of sheer visual lushness, creativity, imagination, beauty. Clearly, it's the best work of the best people using the best technology money can buy. The story could have been about how great it was to club baby seals, and everyone would still rave about it. See it, and be proud to live in 2010. (A slight amendment to this; my wife was unimpressed with it. She's not really a visual person, she's primarily aural mode. There's nothing in this movie for aural people) Regarding the story, you really can't read as much into it as this reviewer. It's skeletal. I think really there's so much meat on the bones of the setting, the visual environment, that they just couldn't afford to also tell a sophisticated story; you'd just not be able to take it in. Also, it's a movie designed for mass appeal, so the story is intentionally dumbed down. Even so, it's too much for some people; a friend was telling me that he sat in front of a woman who kept going "who's that, why are they doing that?" all the way through; she couldn't understand the correspondence between the humans and their avatars. The genre is fantasy action movie. It's got a science fiction setting, including elements with amazing potential, and mostly wastes all of that. As an action movie, it needs that coarse "here's the hero, here's the problem, here's the point of no return, now let's fight". The plot itself, as many have said, is Pocahontas / Dances with Wolves / The Last Samurai, etc, except that the native people win in the end. In fact, I'd say the biggest foil to that reviewer's complaint about the misanthropy is that you can't really believe the ending, because we know from our history that it just doesn't work that way - I imagined them being nuked from orbit 5 minutes after the end of the film. The misanthropy charge generally; it misses the point. In all the films that the reviewer mentions (and films like it), the point is not that humans are bad, and other stuff is good. It's more that the class of things we think of as people is larger than just those who look like and are encultured like us. To get that point across, the story tellers juxtapose people unlike us, with people like us, making the former the good guys and the latter the bad guys, to say to us that we should judge people by their behaviour, not by their tribal affiliations. It's a very straightforward left-oriented message (vs the right's intuitions about kinship, duty, loyalty, which are all about in-group). But generally it's a bit embarrassing to be overly offended or enthused by this story. 
It's just not got enough substance for that. Complain about the lack of sophistication (Movie in 3D, story in 2D), but the politics? Really? -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From olga.bourlin at gmail.com Sat Jan 9 21:19:16 2010 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Sat, 9 Jan 2010 13:19:16 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: I haven't see Avatar either, but this plot sounds oh-so-familiar (and the last paragraph is significant): http://www.nytimes.com/2010/01/08/opinion/08brooks.html Patrick and I are movie fans, and "entertainment" is only one aspect of what we find interesting about movies. What movies reveal to us about the time in which they were produced (by which director and from what country) is what's often the more fascinating tale. Olga On Sat, Jan 9, 2010 at 10:22 AM, Max More wrote: > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) > > Max > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Jan 9 21:45:48 2010 From: pharos at gmail.com (BillK) Date: Sat, 9 Jan 2010 21:45:48 +0000 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B48E373.9030607@satx.rr.com> References: <855333.87547.qm@web36505.mail.mud.yahoo.com> <4B48E373.9030607@satx.rr.com> Message-ID: On 1/9/10, Damien Broderick wrote: > Isn't that exactly saying that you assign some kind of non-computable > aspect to natural brains? (No reason why it should be strange, though.) As I > said several days ago, a landslide doesn't seem to me to compute the > trajectories of all its particles--at least not in any sense that I'm > familiar with. We can *model* the process with various degrees of accuracy > using equations, but it looks like a category mistake to suppose that the > nuclear reactions in the sun are *calculating* what they're doing. I realize > that Seth Lloyd and others disagree (or I think that's what he's saying in > PROGRAMMING THE UNIVERSE--that the universe is *calculating itself*) > but the whole idea of calculation seems to me to imply a compression or > reduction of the mapping of some aspects of one large unwieldy system > onto another extremely stripped-down toy system. > > That might be wrong, I know. I hope Gordon knows it might be wrong as well. > > I think what Gordon might be trying to say is that the brain is not a *digital* computer. Digital computers separate data and program. The brain is more like an analogue computer. It is not like a digital computer that runs a program stored in memory. The brain *is* the program and *is* the computer. And it is a constantly changing analogue computer as it grows new paths and links. There are no brain programs that resemble computer programs stored in a coded format since all the programming and all the data is built into neuronal networks. 
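A deliberately crude sketch of what that means in practice (toy code, invented names, nothing like real neurons): in the snippet below there is no separate instruction store; the table of weights is at once the program and the data, and "learning" is nothing but rewriting that same table.

import random
random.seed(0)

# A toy "neuron": the weights are simultaneously the program (they determine
# what the unit does) and the data (they are what learning modifies).
class ToyUnit:
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def fire(self, inputs):
        # output is just a weighted sum pushed through a threshold
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1.0 if total > 0 else 0.0

    def learn(self, inputs, target, rate=0.1):
        # "reprogramming" = nudging the very numbers that do the computing
        error = target - self.fire(inputs)
        self.weights = [w + rate * error * x
                        for w, x in zip(self.weights, inputs)]

unit = ToyUnit(2)
for _ in range(50):   # teach it a crude OR function
    for inputs, target in [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]:
        unit.learn(inputs, target)
print([unit.fire(x) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])
# prints [0.0, 1.0, 1.0, 1.0] once the weights have been nudged into shape

Scale that wiring up by many orders of magnitude and let it rewire itself continuously, and you have something much closer to a brain than to a stored-program machine.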
If you want to get really complicated, you can think of the brain as multiple analogue computers running in parallel, processing different functions, all growing and changing and passing signals between themselves. This modular parallel design is what causes 'consciousness' to be generated as a sort of synthesis product of all the lower-level modules. The digital computers we have today may not be able to do this. We may need a new generation of a different kind of computer to generate this 'consciousness'. It is a different question whether we need this 'consciousness' in our intelligent computers. BillK From spike66 at att.net Sat Jan 9 21:49:48 2010 From: spike66 at att.net (spike) Date: Sat, 9 Jan 2010 13:49:48 -0800 Subject: [ExI] happy memories and rising tides Message-ID: I had a number of happy memories rush back today, since I was in downtown San Jose at the McHenry Convention Center at a car show. I remembered an extro schmooze we had down there, or right next to it at Hilton if I recall correctly. It must have been in the summer of 01, because I do recall several of us commenting at lunch how we missed Sasha Chislenko and how sad it was he was no longer with us. Is it not amazing he has been gone nearly ten years already? In contrast to that, I have an optimistic note to sound in this post. I moved to the San Jose area about 21 years ago. At that time, the area of town where the McHenry Center now stands is one where you wouldn't hang out, day or night. A mangy junkyard dog would watch his step down there. But by 2001, a revival had been taking place. There were plenty of safer looking areas where one could walk, especially if they were to hang out in clots of 4 or 5 geeks. The neighborhood was a bit spotty, but not bad really. Those who were there, comments welcome, affirming or contradicting. I do recall commenting at the time to stay on the north side of the freeway, for that adjoining neighborhood just two minutes walk underneath that underpass was bad news indeed. But as I recall, the extropians found adequate sustenance and I do not recall anyone having felt threatened. We did suggest people stay indoors after dark if staying at the Hilton. So I was down there this morning at the car show. I marvelled at how nice everything was down there, clean, new, fixed up, nice, nothing at all that looked the least bit dangerous. So I walked around, and found it the same all around there, much better even than it was in 2001. So I decided to risk disappointment and check out on the other side of that freeway. I wasn't disappointed at all! That area is looking waaay better than I ever recall seeing it. Of course it is still lower end housing, lots of ancient dwellings, some fifty or more years old, still standing by some mysterious means after all these years. Perhaps they were built by the same guys who built the Anastasi cliff dwellings. But they were tidy and making an attempt at actual lawns and gardens in many of the houses. There was nothing over there that looked scary. Really! This was a great contrast to the way it appeared 20 years ago. In those days, if one should stumble into that area, one's chances of escaping alive were negligible. One might as well not even bother trying, but rather just save everyone a lot of time and effort, and just pull into the local mortuary, pick out one's favorite pine box, hand them a credit card, climb in and close the lid after oneself. I myself blundered into the area once, only to escape by sheer miracle. But it isn't that way now. 
I saw little that scared me at all. The rising tide has raised all boats in San Jose. I miss Sasha. May his memory live on forever. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Jan 9 21:48:13 2010 From: jonkc at bellsouth.net (john clark) Date: Sat, 9 Jan 2010 13:48:13 -0800 (PST) Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B48D71A.6010806@satx.rr.com> Message-ID: <630266.35889.qm@web180203.mail.gq1.yahoo.com> On Sat, 1/9/10, Damien Broderick wrote: > What are your equivalent credentials, John? Irrelevant, I have not presented experimental results that you are supposed to believe. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Sat Jan 9 21:47:43 2010 From: moulton at moulton.com (moulton at moulton.com) Date: 9 Jan 2010 21:47:43 -0000 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <20100109214743.12401.qmail@moulton.com> On Sat, 2010-01-09 at 12:22 -0600, Max More wrote: > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ To me the review is fundamentally confused. First, the review mistakenly over-identifies some of the humans in the movie with humanity as a whole. Second, the reviewer seems to be either dishonest or ignorant when writing: First, the miners and their mercenaries embark upon genocide with no thought whatsoever, despite the fact that humanity has considered genocidal behaviour to be a bad thing for some time now. This allows Avatar to imply that man has not changed since the explorations and conquests of the Middle Ages. Yes, much of humanity has come to consider genocidal behaviour to be a "bad thing"; however, this consideration is not universal. Remember that in recent history there were humans who engaged in genocide in the heart of civilized Europe just about seven decades ago. And if the reviewer replies "oh that was long ago" then I suggest the reviewer consider Darfur. And further it is simply false to say that the movie Avatar implies "man has not changed since the explorations and conquests of the Middle Ages." It appears to me that the reviewer had an ax to grind and let ideological fervor overwhelm accuracy and coherency. For a humorous spoof I suggest a quick glance at: http://images.huffingtonpost.com/gen/130283/original.jpg > -- Comments from anyone who has seen the movie? (I haven't yet.) Yes, I saw the movie and, as others have commented, it is visually stunning. The 3-D is well done and not used gratuitously. The animation and effects are well integrated. If only they had devoted 10% of the visual budget to a decent script and avoided some poorly done gimmicks in an attempt to do quick character development. I cringed at the cigarette scene at the beginning of the movie. It was so amateurish. The big battle scene was almost a self-parody. One way to get through the movie is, every five minutes, to predict in your mind the next five minutes of the movie, which is not difficult given how formulaic it is. After the movie, one member of the group I was with, after listening to various criticisms, made the comment: "Remember the target audience is twelve year olds". So I suggest seeing it in 3D for the visuals but do not expect much from the story. I hope these comments are helpful.
Fred From gts_2000 at yahoo.com Sat Jan 9 22:14:55 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 9 Jan 2010 14:14:55 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B48E373.9030607@satx.rr.com> Message-ID: <483875.29296.qm@web36507.mail.mud.yahoo.com> --- On Sat, 1/9/10, Damien Broderick wrote: >> how can I say this without assigning some kind of >> strange non-computable aspect to natural brains. >> The answer is that I say it because I don't believe >> the brain is actually a computer. > > Isn't that exactly saying that you assign some kind of > non-computable aspect to natural brains? No, I think consciousness will likely turn out to be just a state that natural brains can enter not unlike water can enter a state of solidity. Nothing strange or dualistic or non-physical or non-computable about it! But the computer simulation of it won't have consciousness any more than will a simulation of an ice cube have coldness. Computer simulations of things do not equal the things they simulate. (I wish I had a nickel for every time I've said that here :-) A computer simulation of a brain *would* however equal a brain in the special case that natural brains do in fact exist as computers. However real brains have semantics and it looks to me like real computers do not and cannot, so I do not equate natural brains with computers. The computationalist theory of mind seems like a nifty idea, but I think it does not compute. -gts From stefano.vaj at gmail.com Sat Jan 9 22:21:51 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 9 Jan 2010 23:21:51 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> Message-ID: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> 2010/1/9 Stathis Papaioannou : > I agree with you, of course. The zombie neurons are just as good as > real neurons in every respect, so there is really no basis in > distinguishing them as zombie neurons. But this needs to be carefully > explained; if it were obvious, Gordon would not still be arguing. I suspect that Gordon is simply a dualist, with emotional reasons to think that conscience must be something "special" and different from the mere phenomena it is used to describe. This is not a rare attitude, including amongst people with no overtly metaphysical penchant, but I think that no amount of science can make them to change their views, which are fundamentally philosophical in nature. More interesting, if one is not willling to go down this way, I think one has ultimately to recognise that "conscience" is ultimately a (rather elusive) social construct, and that organic brains have not really much to do with it one way or another. Meaning that nothing they are neither necessary nor sufficient to exhibit the behaviours required to allow us to engage in processes of projection and identification. -- Stefano Vaj From spike66 at att.net Sat Jan 9 22:38:11 2010 From: spike66 at att.net (spike) Date: Sat, 9 Jan 2010 14:38:11 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <20100109214743.12401.qmail@moulton.com> References: <20100109214743.12401.qmail@moulton.com> Message-ID: > On Behalf Of moulton at moulton.com > ... > "Remember the target audience is twelve years olds". ... > > I hope this comments are helpful... Fred Very much so, thanks Fred. 
Today's 12 yr olds have had avatar based video games their entire lives. I didn't play one until 1986. Fred you and I would have been in our mid 20s by then. spike From thespike at satx.rr.com Sat Jan 9 22:41:55 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 16:41:55 -0600 Subject: [ExI] conscience In-Reply-To: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> Message-ID: <4B490633.8070900@satx.rr.com> On 1/9/2010 4:21 PM, Stefano Vaj wrote: > More interesting, if one is not willling to go down this way, I think > one has ultimately to recognise that "conscience" is ultimately a > (rather elusive) social construct, and that organic brains have not > really much to do with it one way or another. This is a curious lexical error that non-English writers often make in English, so I assume there must be only a single word in their languages for the two very different concepts "conscience" ("the inner sense of what is right or wrong in one's conduct or motives, impelling one toward right action") and "consciousness" ( "the state of being conscious; awareness of one's own existence, sensations, thoughts, surroundings, etc."). I dimly recall that this is so in French. If so, how do you convey the difference in Italian, etc? Damien Broderick From scerir at libero.it Sat Jan 9 22:53:48 2010 From: scerir at libero.it (scerir) Date: Sat, 9 Jan 2010 23:53:48 +0100 (CET) Subject: [ExI] R: conscience Message-ID: <20823311.379841263077628333.JavaMail.defaultUser@defaultHost> in all conscience these are beautiful pictures http://hirise.lpl.arizona.edu/katalogos.php From pharos at gmail.com Sat Jan 9 23:12:36 2010 From: pharos at gmail.com (BillK) Date: Sat, 9 Jan 2010 23:12:36 +0000 Subject: [ExI] conscience In-Reply-To: <4B490633.8070900@satx.rr.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: On 1/9/10, Damien Broderick wrote: > This is a curious lexical error that non-English writers often make in > English, so I assume there must be only a single word in their languages for > the two very different concepts "conscience" ("the inner sense of what is > right or wrong in one's conduct or motives, impelling one toward right > action") and "consciousness" ( "the state of being conscious; awareness of > one's own existence, sensations, thoughts, surroundings, etc."). I dimly > recall that this is so in French. If so, how do you convey the difference in > Italian, etc? > > You are correct that the same word 'conscience' has multiple meanings in French. See: Quote: Il est important de distinguer : * La conscience en tant que ph?nom?ne mental li? ? la perception et la manipulation intentionnelle de repr?sentations mentales, qui comprend : 1. la conscience du monde qui est en relation avec la perception du monde ext?rieur, des ?tres vivants dou?s ou non de conscience dans l?environnement et dans la soci?t? (autrui). 2. la conscience de soi et de ce qui se passe dans l?esprit d?un individu : perceptions internes (corps propre), aspects de sa personnalit? et de ses actes (identit? 
du soi, op?rations cognitives, attitudes propositionnelles). * La conscience morale, respect de r?gles d'?thique. Le terme conscience est donc susceptible de prendre plusieurs significations, selon le contexte. Translation: It is important to distinguish: * Consciousness as a mental phenomenon linked to the perception and the deliberate manipulation of mental representations, which includes: 1. consciousness of the world that is related to the perception of the world outside, living beings endowed with conscience or not in the environment and the society (others). 2. self-consciousness and what happens in the mind of an individual: perceptions of internal (own body), aspects of his personality and his deeds (identity of the self, operations cognitive, propositional attitudes). * Consciousness morality, respect for rules of ethics. The term consciousness is likely to take several meanings depending on context. ------------------- So the French can make the distinction by saying 'La conscience morale' when they mean 'conscience'. BillK From thespike at satx.rr.com Sat Jan 9 23:27:30 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 17:27:30 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <630266.35889.qm@web180203.mail.gq1.yahoo.com> References: <630266.35889.qm@web180203.mail.gq1.yahoo.com> Message-ID: <4B4910E2.7060300@satx.rr.com> On 1/9/2010 3:48 PM, john clark wrote: > > What are your equivalent credentials, John? > Irrelevant, I have not presented experimental results that you are > supposed to believe. Okay. As my dear old departed Mum used to say, "A cat may look at a king." But of course you snipped out the real point of that post, which was Dr. Sheldrake's credentials, very far from the straw yokel you always push at us. You will probably reply (if you've bothered looking Sheldrake up): "Oh that idiot--he believes all sort of mad BULLSHIT, and look, Nature's editor said his first book should be burned!" My response: I also find some of Sheldrake's theories over the top or silly, but (1) that has nothing to do with his experiments, and (2) he seems to have been driven toward them by experimental results that standard science hasn't accounted for, or just denies. So we're back in the trap I mentioned before: if a scientist with a solid background speaks up for psi, it *means* he's a lunatic/gullible/lying etc, so you don't need to consider anything further that he says. Meanwhile, your list of drooling backwoods cretins includes: Dr. Edwin May, PhD in experimental nuclear physics at UC Davis, long-time scientific director of the long-classified Star Gate program funded for some 20 years *on an annual basis, requiring scientific board oversight and approval* by its government sponsors. Dr. May is a friend of mine; I have read much of his work since it was declassified, and I trust him. Dr. Dean Radin, masters in electrical engineering and a PhD in psychology from the University of Illinois, Champaign-Urbana. For a decade he worked on advanced telecommunications R&D at AT&T Bell Laboratories and GTE Laboratories. Professor Robert Jahn, former Dean of the department of Mechanical and Aerospace Engineering, School of Engineering/Applied Science, Princeton University. Dr. Roger Nelson, PhD in experimental cognitive psychology, long-time member of the Princeton University PEAR team. Dr. Stanley Krippner, B.S University of Wisconsin, Madison, WI, MA and PhD Northwestern University, Evanston, IL. 
I could go on at some length. Most parapsychologists these days have their work card from the academy. So why aren't their papers published in Nature and Science? Suppose stem cell papers were routinely sent for review to Jesuits at the God Hates Abortion Institute at Notre Dame, or to the Tribophysics Dept at Columbia. Why, the referees mutter, this is BULLSHIT or wicked, or worse, you think we're going to waste our time on such nonsense? Reject! (I've read some such referee reports, such as one from Science; they are shamefully empty of critique.) Btw, how many papers in perceptual psychology are published in Nature? I don't know, maybe quite a few. Neuroscience might allow psi in, but it'd be a squeeze. My impression is that Nature tends to focus on physics, cosmology, cell biology, genetics, genomics, etc.** Damien Broderick **eg: # Nature # Nature Biotechnology # Nature Cell Biology # Nature Chemical Biology # Nature Chemistry # Nature Clinical Practice Journals # Nature Communications # Nature Digest # Nature Genetics # Nature Geoscience # Nature Immunology # Nature Materials # Nature Medicine # Nature Methods # Nature Nanotechnology # Nature Neuroscience # Nature Photonics # Nature Physics # Nature Protocols # Nature research journals # Nature Reviews journals # Nature Reviews Cancer # Nature Reviews Cardiology (formerly Nature Clinical Practice Cardiovascular Medicine) # Nature Reviews Clinical Oncology (formerly Nature Clinical Practice Oncology) # Nature Reviews Drug Discovery # Nature Reviews Endocrinology (formerly Nature Clinical Practice Endocrinology & Metabolism) # Nature Reviews Gastroenterology and Hepatology (formerly Nature Clinical Practice Gastroenterology and Hepatology) # Nature Reviews Genetics # Nature Reviews Immunology # Nature Reviews Microbiology # Nature Reviews Molecular Cell Biology # Nature Reviews Nephrology (formerly Nature Clinical Practice Nephrology) # Nature Reviews Neurology (formerly Nature Clinical Practice Neurology) # Nature Reviews Neuroscience # Nature Reviews Rheumatology (formerly Nature Clinical Practice Rheumatology) # Nature Reviews Urology (formerly Nature Clinical Practice Urology) # Nature Structural and Molecular Biology # Neuropsychopharmacology From stathisp at gmail.com Sat Jan 9 23:38:38 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 10:38:38 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/10 Gordon Swobe : > Human operators ascribe meanings to the symbols their computers manipulate. Sometimes humans forget this and pretend that the computers actually understand the meanings. > > It's an understandable mistake; after all it sure *looks* like computers understand the meanings. But then that's what programmers do for a living: we program dumb machines to make them look like they have understanding. > > The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" > > And Searle's answer is: "It won't happen from running formal syntactical programs on hardware as we do today, because computers and their programs cannot and will never get semantics from syntax. We'll need to find another way. And we know we can do it even if we don't yet know the way. After all nature did it in these machines we call humans." 
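Before replying, it may help to pin down what kind of thing these "meaningless symbols" actually are. Here is a hypothetical fragment (invented names and a made-up rule table, not Deep Blue's real representation): the program shuffles the tokens happily, while the reading "white king on e1" exists only in the head of whoever wrote the comments.

# A hypothetical sketch of "meaningless symbols" standing for a chess
# position (not Deep Blue's actual code; the names are invented).
# To the program these are just tokens to be matched and rewritten.
position = {"e1": "WK", "e8": "BK", "d1": "WQ"}

def candidate_moves(square, piece):
    # a fake rule table: purely syntactic lookup, no notion of "queen"
    table = {"WQ": ["d2", "d3", "c2"], "WK": ["e2", "d2"], "BK": ["e7"]}
    return [(square, dest) for dest in table.get(piece, [])]

moves = [m for sq, p in position.items() for m in candidate_moves(sq, p)]
print(moves)
# [('e1', 'e2'), ('e1', 'd2'), ('e8', 'e7'), ('d1', 'd2'), ('d1', 'd3'), ('d1', 'c2')]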
The meaning of the symbols in a computer program is arbitrary, assigned by the programmer or by the context if the computer is learning from the environment. But where does the meaning of symbols in brains come from? A child is told "dog" and shown a picture of a dog, so "dog" comes to mean dog. It's not as if "dog" has some God-given, absolute meaning which only brains can access. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 9 23:41:06 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 10:41:06 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> Message-ID: 2010/1/10 Mike Dougherty : > How can we make people actually understand the meanings and not merely > appear to understand the meanings? More to the point, what is the difference between real understanding and pseudo-understanding? If I can use a word appropriately in every context, then ipso facto I understand that word. -- Stathis Papaioannou From pharos at gmail.com Sat Jan 9 23:59:05 2010 From: pharos at gmail.com (BillK) Date: Sat, 9 Jan 2010 23:59:05 +0000 Subject: [ExI] conscience In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: On 1/9/10, BillK wrote: > You are correct that the same word 'conscience' has multiple meanings > in French. > See: > > And a quick look in the Italian wikipedia shows that the same range of multiple meanings for 'coscienza' exists in Italian as well. I don't speak Italian, but it seems reasonable that Italians could say 'coscienza morale' when they mean conscience. BillK From lcorbin at rawbw.com Sun Jan 10 00:02:02 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 09 Jan 2010 16:02:02 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> Message-ID: <4B4918FA.8000708@rawbw.com> Emlyn writes > The plot itself, as many have said, is Pocahontas / Dances with Wolves > / The Last Samurai, etc, except that the native people win in the end. > In fact, I'd say the biggest foil to that reviewer's complaint about > the misanthropy is that you can't really believe the ending, because > we know from our history that it just doesn't work that way Yes. The ending was so "unbelievable" (in a thinking man's sense) that I never gave it a second thought. > - I imagined them being nuked from orbit 5 minutes after > the end of the film. Well, you sound as anti-human as the reviewer Steve Bremner accused Cameron of being. What you write is not at all how it would end, any more than the movie's version. It would really end with the corporation coming back to Pandora and "making it right" with whoever the leaders are. I.e., cutting them in on the action. That's of course how Chief Seattle got what was coming to him. Not in the made-up account, as exemplified by, in 1971, a totally fabricated speech by the Chief. 
See http://www.snopes.com/quotes/seattle.asp But you have to look hard on Google to find the truth that the chief's speech was made up.) Another thing so obvious in the movie that I didn't give it a second thought is that while the Pandorans had a great deal to give humans about biological knowledge (me, I'd estimate their flora and fauna to be about 500 million years more advanced than our Earth flora and fauna), the humans, on the other hand, clearly had a great deal of technology to share with the Pandorans. Thus a deal could be struck. Thus a deal *would* be struck... if you know anything at all about the history of trade. (I must add that Bernstein's new book "A Splendid Exchange" is the most limitlessly fascinating and informative book I can presently imagine about trade, and its impact on history.) > But generally it's a bit embarrassing to be overly offended or > enthused by this story. It's just not got enough substance for that. > Complain about the lack of sophistication (Movie in 3D, story in 2D), > but the politics? Really? I don't know. People growing up today, unless they're of an especially thoughtful variety, are surely being overwhelmed with all these anti-tech and anti-progress memes, just as the reviewer says. The media and the left-culture has already turned two generations of people against corporations (and by default therefore towards pro-government solutions), so what else is new. Yes, I'd agree, however, that the politics is not the first thing that strikes one about the movie. Lee From thespike at satx.rr.com Sun Jan 10 00:04:54 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 09 Jan 2010 18:04:54 -0600 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> Message-ID: <4B4919A6.6050600@satx.rr.com> On 1/9/2010 5:41 PM, Stathis Papaioannou wrote: > More to the point, what is the difference between real understanding > and pseudo-understanding? If I can use a word appropriately in every > context, then ipso facto I understand that word. Not relevant. What you can do is exactly beside the point when discussing what robot systems can do. A good Google translation now can cough up a reliable translation from Hungarian (I know, I used it the other day to turn part of one of my human-translated papers back into English). It would be perverse to claim that the Google system understood the words being translated, even though the complex program was able to find appropriate English words and syntax. I understood it, the machine didn't. Damien Broderick From lcorbin at rawbw.com Sun Jan 10 00:06:00 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 09 Jan 2010 16:06:00 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <4B4919E8.6070104@rawbw.com> Max More wrote: > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) That reviewer is more like me than I am! I thought I was "way out there" in siding with the Japanese bureaucracy and modern gatling-gun toting imperial forces against the unruly, backward, primitive, and fighting-is-all-they-know samurai, in the movie "The Last Samurai". 
But I humbly bow to the reviewer *Steve Bremner* who wrote about Avatar: "By the end of the film, this reviewer felt like rising to his feet and cheering the final human attack on the Na?vi. Indeed, much of the audience seemed ambivalent - we were clearly dazzled by the spectacular 3D effects and the beautiful rendering of the alien planet, but the unrelentingly bleak portrayal of humanity left everyone more than a little despondent as we left the cinema to celebrate the New Year." Wow. Incredible. Even *I* was on the side of the Navi by then. Perhaps my relative lack of offense, compared to good old Steve here, is that I'm probably a lot older than he is, and have just been resigned for far more decades to the inevitable depiction in Hollywood movies of modernity as evil, and the glorification of the noble savage. I liked the film very much, notwithstanding that every word Steve writes is true, absolutely true (so to speak). People like Cameron are nothing short of hypocrites, if you have the sense to follow Steve Bremner's clear insights and conclusions. Well... high time I read all the rest of those posts following Max's and see if I'm just echoing the chorus or not. Lee P.S. The reviewer did not mention that all the male characters of the Navi are *warriors*, and you can't get to be a warrior unless there is a lot of war going on. Again, so much for the glorification of the hunter/gatherer/warrior. From pharos at gmail.com Sun Jan 10 00:12:44 2010 From: pharos at gmail.com (BillK) Date: Sun, 10 Jan 2010 00:12:44 +0000 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <4B4918FA.8000708@rawbw.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> <4B4918FA.8000708@rawbw.com> Message-ID: On 1/10/10, Lee Corbin wrote: > I don't know. People growing up today, unless they're of > an especially thoughtful variety, are surely being overwhelmed > with all these anti-tech and anti-progress memes, just as > the reviewer says. The media and the left-culture has already > turned two generations of people against corporations (and > by default therefore towards pro-government solutions), so > what else is new. Yes, I'd agree, however, that the politics > is not the first thing that strikes one about the movie. > > Nit-pick. Yes, people have been turned against corporations and more to pro-government solutions. But the problem in the US is that the corporations have become the government. So the mass of the US people who are rapidly becoming the poor and / or unemployed have to find a different government to be pro-. BillK From stathisp at gmail.com Sun Jan 10 01:16:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 12:16:12 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <606713.51772.qm@web36508.mail.mud.yahoo.com> References: <606713.51772.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/10 Gordon Swobe : >> The patient was not a zombie before the operation, since >> most of his brain was functioning normally, so why would he be a zombie >> after? > > To believe something one must have an understanding of the meaning of the thing believed in, and I have assumed from the beginning of our experiment that the patient presents with no understanding of words, i.e., with complete receptive aphasia from a broken Wernicke's. 
I don't believe p-neurons will cure his aphasia subjectively, but I think his surgeon will eventually succeed in programming him to behave outwardly like one who understands words. > > After leaving the hospital, the patient might tell you he believes in Santa Claus, but he won't actually "believe" in it; that is, he won't have a conscious subjective understanding of the meaning of "Santa Claus". He has no understanding of words before the operation, but he still has understanding! If he sees a dog he knows it's a dog, he knows if it's a friendly dog or a vicious dog to be avoided, he knows that dogs have to eat and how to open a can of dog food, and so on - even though the word "dog" is incomprehensible to him. After the operation, whether it's Cram with the p-neurons or Sam with the c-neurons, when he hears the word "dog" he will get an image of a dog in his head, and he will think, "that must be what people meant when they were making sounds and pointing to a dog before!" If he is asked "how many legs does a dog which has lost one of its legs have?" he will get an image of a dog hobbling about on three legs and answer, "three"; and he will remember when he was a child and his own dog was run over by a car and lost one of its legs. So his behaviour in relation to language will be exactly the same whether he got the p-neurons or the c-neurons, and his cognitions, feelings, beliefs and understanding at least in the normal part of his brain will also be the same in either case. But you claim that Cram will actually have no understanding of "dog" despite all this. That is what seems absurd: what else could it possibly mean to understand a word if not to use the word appropriately and believe you know the meaning of the word? That's all you or I can claim at the moment; how do we know we don't have a zombified language centre? >> Before the operation he sees that people don't understand >> him when he speaks, and that he doesn't understand them when they >> speak. He hears the sounds they make, but it seems like gibberish, making >> him frustrated. After the operation, whether he gets the >> p-neurons or the c-neurons, he speaks normally, he seems to understand >> things normally, and he believes that the operation is a success as he >> remembers his difficulties before and now sees that he doesn't have >> them. > > Perhaps he no longer feels frustrated but still he has no idea what he's talking about! He only *thinks* he knows what he is talking about and *behaves* as if he knows what he is talking about. >> Perhaps you see the problem I am getting at and you are >> trying to get around it by saying that Cram would become a zombie. > > I have only this question unanswered in my mind: "How much more complete of a zombie does Cram become as a result of the surgeon's long and tedious process of reprogramming his brain to make him seem to function normally despite his inability to experience understanding? When the surgeon finally finishes with him such that he passes the Turing test, will the patient even know of his own existence?" Why do you think the surgeon needs to do anything to the rest of his brain? The p-neurons by definition accept input from the auditory cortex, process it and send output to the rest of the brain exactly the same as the c-neurons do. That's their one and only task, and the surgeon's task is to install them in the right place causing as little damage to the rest of the brain as possible. 
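To make the substitution claim concrete, here is a rough sketch (invented names, with a single function standing in for a whole population of neurons): the rest of the system only ever sees a module's outputs, so a replacement that reproduces the original's input-output mapping cannot change anything downstream.

# Two interchangeable "Wernicke modules" (invented stand-ins): the original,
# and a recorded lookup-table replacement. The rest of the "brain" sees only
# their outputs, so identical I/O forces identical downstream behaviour.
def c_module(sound):
    return {"dog": "DOG-CONCEPT", "sit": "SIT-CONCEPT"}.get(sound, "UNKNOWN")

# build the replacement by recording the original's I/O over its inputs
recorded = {s: c_module(s) for s in ["dog", "sit", "xyzzy"]}
def p_module(sound):
    return recorded.get(sound, "UNKNOWN")

def rest_of_brain(module, sound):
    concept = module(sound)          # downstream sees only the output
    return f"heard '{sound}' -> acts on {concept}"

for s in ["dog", "sit", "xyzzy"]:
    assert rest_of_brain(c_module, s) == rest_of_brain(p_module, s)
print("behaviour identical for all tested inputs")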
And if the p-neurons duplicate the I/O behaviour of c-neurons, the behaviour of the rest of the brain and the person as a whole must be the same. It must! Are you still trying to say that the p-neurons *won't* be able to duplicate the I/O behaviour of the c-neurons due to lacking understanding? Then you have to say that p-neurons (zombie or weak AI neurons) are impossible, that there is something non-algorithmic about the behaviour of neurons. But you seem very reluctant to agree to this. Instead, you put yourself in a position where you have to say that Cram lacks understanding, but behaves as if he has understanding and believes that he has understanding; in which case, we could all be Cram and not know it. -- Stathis Papaioannou From lcorbin at rawbw.com Sun Jan 10 01:17:55 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 09 Jan 2010 17:17:55 -0800 Subject: [ExI] Corporate Misbehavior (was Avatar: misanthropy in three dimensions) In-Reply-To: References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <710b78fc1001091251s290909b5ubcff45bd98e9075e@mail.gmail.com> <4B4918FA.8000708@rawbw.com> Message-ID: <4B492AC3.4030208@rawbw.com> BillK writes > Yes, people have been turned against corporations and more > to pro-government solutions. But the problem in the US is that the > corporations have become the government. I quite agree! I didn't mean to imply that corporations are run by public spirited saints. On the contrary, like in politics, the systems evolve people into positions of power who are not exactly like your next door neighbor. But if restrained by competition, corporations are to a very great extent working on behalf of the public, just as Adam Smith explained was true of candlestick makers. But not that they, the corporations, like it one bit. J. P. Morgan constantly complained about "ruinous competition", and was largely successful in taming it, partly through mergers and monopolies, but most effectively in promoting government regulation. Do you think that the airlines, for example, *like* being deregulated? Nothing warms the corporate heart as much, or puts as much money in its pockets, as cozy regulation by sympathetic government types. Government is corporations' chief instrument of exercising power, these days. And unfortunately, most people totally miss the point when they call for more government regulation! Some 70,000 pages of new regulations are inflicted on the public each year by congress here in the USA. Oops. Did I say "congress"? Actually, far from it. If it were congress, then that at least would be somewhat constitutional. Instead, the regulatory agents are who regulate American life, as a part of an evil entity we call "the Administration". And they work hand in hand with corporations to suppress competition, and especially the easy entry into the market of those who would challenge monopoly. > So the mass of the US people who are rapidly becoming > the poor and / or unemployed have to find a different > government to be pro-. Again, I totally agree! The present western governments need to be slowly disbanded, agency by agency. It will be painful (so painful that, of course, it will never happen), but the alternative is slow death by regulation. Which will happen. 
Lee From stathisp at gmail.com Sun Jan 10 01:29:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 12:29:51 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <483875.29296.qm@web36507.mail.mud.yahoo.com> References: <4B48E373.9030607@satx.rr.com> <483875.29296.qm@web36507.mail.mud.yahoo.com> Message-ID: 2010/1/10 Gordon Swobe : > But the computer simulation of it won't have consciousness any more than will a simulation of an ice cube have coldness. Computer simulations of things do not equal the things they simulate. (I wish I had a nickel for every time I've said that here :-) But the computer simulation can drive a robot to behave like the thing it is simulating, and this robot can be installed in place of part of the brain. The result must be (*must* be; I wish I had 5c for every time I've said that here) that the person with the cyborgised brain behaves normally and believes that everything is normal. So either you must allow that it is coherent to speak of a pseudo-understanding which is subjectively and objectively indistinguishable from true understanding, or you must admit that the original premise, that the robot part lacks understanding, is false. The only other way out is to deny that it is possible to make such robot parts because there is something about brain physics which is not computable. -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 10 01:53:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 12:53:27 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <855333.87547.qm@web36505.mail.mud.yahoo.com> <4B48E373.9030607@satx.rr.com> Message-ID: 2010/1/10 BillK : > I think what Gordon might be trying to say is that the brain is not a > *digital* computer. > > Digital computers separate data and program. > > The brain is more like an analogue computer. It is not like a digital > computer that runs a program stored in memory. The brain *is* the > program and *is* the computer. And it is a constantly changing > analogue computer as it grows new paths and links. There are no brain > programs that resemble computer programs stored in a coded format > since all the programming and all the data is built into neuronal > networks. > > If you want to get really complicated, you can think of the brain as > multiple analogue computers running in parallel, processing different > functions, all growing and changing and passing signals between > themselves. No-one claims that the brain is a digital computer, but it can be simulated by a digital computer. The ideal analogue computer cannot be emulated by a digital computer because it can use actual real numbers. However, the real world appears to be quantised rather than continuous, so actual analogue computers do not use real numbers. And even if the world turned out to be continuous factors such as thermal noise would make all the decimal places after the first few in any parameter irrelevant, so there would be no need to use infinite precision arithmetic to simulate an analogue device. 
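As a quick numerical sketch of that last point (arbitrary toy numbers, nothing physiological): evolve the same quantity twice, once in full double precision and once rounded to six decimal places at every step, with the same small "thermal" noise added to both; the rounding never matters, because the noise is orders of magnitude larger.

import random
random.seed(1)

# An arbitrary bounded toy update rule. One trajectory keeps full double
# precision, the other is rounded to 6 decimal places each step; both get
# the same small "thermal" noise. The rounding error stays buried in it.
x_full, x_rounded = 0.5, 0.5
max_gap = 0.0
for _ in range(10_000):
    noise = random.gauss(0.0, 1e-3)
    x_full = 3.9 * x_full * (1.0 - x_full) * 0.25 + noise
    x_rounded = round(3.9 * x_rounded * (1.0 - x_rounded) * 0.25 + noise, 6)
    max_gap = max(max_gap, abs(x_full - x_rounded))

print(f"largest divergence over 10,000 steps: {max_gap:.2e}")
# far below the 1e-3 noise amplitude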
-- Stathis Papaioannou From stathisp at gmail.com Sun Jan 10 02:06:52 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 13:06:52 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <4B4919A6.6050600@satx.rr.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> <62c14241001091211p4bf39b96t6ce9ba61b2c8b079@mail.gmail.com> <4B4919A6.6050600@satx.rr.com> Message-ID: 2010/1/10 Damien Broderick : > On 1/9/2010 5:41 PM, Stathis Papaioannou wrote: >> >> More to the point, what is the difference between real understanding >> and pseudo-understanding? If I can use a word appropriately in every >> context, then ipso facto I understand that word. > > Not relevant. What you can do is exactly beside the point when discussing > what robot systems can do. A good Google translation now can cough up a > reliable translation from Hungarian (I know, I used it the other day to turn > part of one of my human-translated papers back into English). It would be > perverse to claim that the Google system understood the words being > translated, even though the complex program was able to find appropriate > English words and syntax. I understood it, the machine didn't. I specified "use a word appropriately in every context"; Google can't as yet do that. It is possible for a human to translate one language into another language using a dictionary despite understanding neither language. In order to understand it he has to have another dictionary so he can associate words in the unknown language with words in a language he does know, and in turn he associates words in the known language with objects in the real world. The objects in the real world are themselves only known through sense data, which is basically just more symbols, not the object itself. So it's syntactical relationships all the way down. What else could understanding possibly be? -- Stathis Papaioannou From stathisp at gmail.com Sun Jan 10 02:46:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jan 2010 13:46:08 +1100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: 2010/1/10 Max More : > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) I haven't seen the film, but in general I think it's a good thing if the more powerful party consider that they may be unfair in their dealings with the the less powerful party, even if it isn't true. When the aliens arrive I would be happier if they are benevolent and self-doubting rather than benevolent and completely confident that they are doing the right thing. -- Stathis Papaioannou From moulton at moulton.com Sun Jan 10 04:33:07 2010 From: moulton at moulton.com (moulton at moulton.com) Date: 10 Jan 2010 04:33:07 -0000 Subject: [ExI] Avatar: misanthropy in three dimensions Message-ID: <20100110043307.96499.qmail@moulton.com> On Sat, 2010-01-09 at 14:38 -0800, spike wrote: > Very much so, thanks Fred. Today's 12 yr olds have had avatar based video > games their entire lives. I didn't play one until 1986. Fred you and I > would have been in our mid 20s by then. Well to be honest in 1986 I was well beyond my mid 20s. Your comment brings to mind something a friend told me recently about teaching the son to drive about 5 years ago. 
It was much easier and the son learned more quickly than his sister who was about 4 years older than him. When the parent remarked to the son about how much easier it was to teach him than the older sister the son replied it was because he had played so many video games including some which involved driving virtual vehicles. It would be an interesting study to see if those who learn to drive more easily due to playing video games have a higher or lower accident rate. Fred From ddraig at gmail.com Sun Jan 10 06:12:05 2010 From: ddraig at gmail.com (ddraig) Date: Sun, 10 Jan 2010 17:12:05 +1100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <20100110043307.96499.qmail@moulton.com> References: <20100110043307.96499.qmail@moulton.com> Message-ID: 2010/1/10 : > It would be an interesting study to see if those who learn to drive more easily > due to playing video games have a higher or lower accident rate. There is also a study showing that surgeons who are frequent gamers are also better at microsurgery, due to the improved hand-eye coordination gaming gives them Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From bbenzai at yahoo.com Sun Jan 10 13:31:13 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 10 Jan 2010 05:31:13 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <718501.52709.qm@web113618.mail.gq1.yahoo.com> > Damien Broderick wrote: > > On 1/9/2010 5:41 PM, Stathis Papaioannou wrote: > > More to the point, what is the difference between real > understanding > > and pseudo-understanding? If I can use a word > appropriately in every > > context, then ipso facto I understand that word. > > Not relevant. What you can do is exactly beside the point > when > discussing what robot systems can do. A good Google > translation now can > cough up a reliable translation from Hungarian (I know, I > used it the > other day to turn part of one of my human-translated papers > back into > English). It would be perverse to claim that the Google > system > understood the words being translated, even though the > complex program > was able to find appropriate English words and syntax. I > understood it, > the machine didn't. Google isn't a robot. What a human can do *is* relevant to what a robot can do, because they both not only have a brain, but also a body. The word "move" is meaningless to Google because it has no experience of moving, so all it can do is relate it to another word in another language. To a system that does have the means of movement (whether that be via a real-world body or in a simulated environment), it has an experience of what moving is like. That's the 'symbol grounding' it needs to make sense of the word. It now has a meaning. "Using the word appropriately in every context" means that if you say "Could you move 2 metres to your left?" the system will be able to answer yes or no, and do it or not, depending on it's physical state and environment. Moving 2 metres to the left is meaningless to Google, because Google doesn't have legs (or wheels, etc.). If you hooked Google up to a robotic (or virtual) body, and gave it the means to sense the environment, and move the body, and hooked up words to actions, then it would be capable of understanding (assigning meaning to) the words, because they would now have a context. 
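To make that concrete, here is a toy sketch (just my illustration, nothing Google actually does): a simulated agent in a one-room world in which "move 2 metres to your left" is tied to an action, to the agent's own position sense, and to the wall it may run into, rather than to yet another string of symbols.

class SimulatedAgent:
    # Toy grounding sketch: the phrase "move N metres to your left" is
    # connected to an action and to sensed consequences, not to a dictionary.
    def __init__(self, x=2.5, room_width=3.0):
        self.x = x                      # position as sensed by the agent
        self.room_width = room_width    # the left wall sits at x = 0

    def can_move_left(self, metres):
        return self.x - metres >= 0.0   # the answer depends on physical state

    def move_left(self, metres):
        if not self.can_move_left(metres):
            return "no"                 # refused: the wall is in the way
        before = self.x
        self.x -= metres                # the action itself
        # "sense data" confirming the move actually happened
        assert abs((before - self.x) - metres) < 1e-9
        return "yes"

agent = SimulatedAgent()
print(agent.move_left(2.0))   # "yes" -- and the agent's sensed position changes
print(agent.move_left(2.0))   # "no"  -- only half a metre of room remains

Whether the position variable is attached to wheels or to a purely virtual room, the word has been hooked up to something other than more words, which is all I mean by giving it a context.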
Ben Zaiboc From stathisp at gmail.com Sun Jan 10 14:46:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 01:46:22 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <718501.52709.qm@web113618.mail.gq1.yahoo.com> References: <718501.52709.qm@web113618.mail.gq1.yahoo.com> Message-ID: 2010/1/11 Ben Zaiboc : > Google isn't a robot. ?What a human can do *is* relevant to what a robot can do, because they both not only have a brain, but also a body. The word "move" is meaningless to Google because it has no experience of moving, so all it can do is relate it to another word in another language. > > To a system that does have the means of movement (whether that be via a real-world body or in a simulated environment), it has an experience of what moving is like. ?That's the 'symbol grounding' it needs to make sense of the word. ?It now has a meaning. > > "Using the word appropriately in every context" means that if you say "Could you move 2 metres to your left?" the system will be able to answer yes or no, and do it or not, depending on it's physical state and environment. ?Moving 2 metres to the left is meaningless to Google, because Google doesn't have legs (or wheels, etc.). > > If you hooked Google up to a robotic (or virtual) body, and gave it the means to sense the environment, and move the body, and hooked up words to actions, then it would be capable of understanding (assigning meaning to) the words, because they would now have a context. It gets a bit tricky when you talk about a virtual body in a virtual environment. There may be a mapping between what happens in the computer when it follows an instruction to move two metres to the left and moving two metres to the left in the real world, but there is no basis for saying that this is what the symbols in the computer "mean", since there are also other possible mappings. -- Stathis Papaioannou From gts_2000 at yahoo.com Sun Jan 10 15:05:54 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 07:05:54 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <53116.2093.qm@web36506.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stathis Papaioannou wrote: >> After leaving the hospital, the patient might tell you > he believes in Santa Claus, but he won't actually "believe" > in it; that is, he won't have a conscious subjective > understanding of the meaning of "Santa Claus". > > He has no understanding of words before the operation, but > he still has understanding! If he sees a dog he knows it's a dog, To think coherently about dogs or about anything else, one must understand words and this poor fellow cannot understand his own spoken or unspoken words or the words of others. At all. He completely lacks understanding of words, Stathis. Suffering from complete receptive aphasia, he has no coherent thoughts whatsoever. We can suppose less serious aphasias if you like, but to keep our experiment pure I have assumed complete receptive aphasia. With b-neurons or possibly with m-neurons we can cure him. We p-neurons we can only program him to speak and behave in a way that objective observers will find acceptable, i.e., we can program him to pass the Turing test. > But you claim that Cram will actually have no understanding of > "dog" despite all this. That is what seems absurd: what else could it > possibly mean to understand a word if not to use the word appropriately > and believe you know the meaning of the word? 
Although Cram uses the word "dog" appropriately after the operation, he won't believe he knows the meaning of the word, i.e., he will not understand the word "dog". If that seems absurd to you, remember that he did not understand it before the operation either. In this respect nothing has changed. -gts From jonkc at bellsouth.net Sun Jan 10 16:03:48 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 11:03:48 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays) In-Reply-To: <201001080840.o088ec6A009055@andromeda.ziaspace.com> References: <201001080840.o088ec6A009055@andromeda.ziaspace.com> Message-ID: <71CD007B-C594-4E9A-A99C-07F6E18F009E@bellsouth.net> On Jan 8, 2010, Max More wrote: > it would shake up physics and expand our horizons and potentially open new avenues to enhancement. So, I have nothing intrinsically against it. I have nothing against Psi either and I wish it were true, I wish cold fusion worked too, but wishing does not make it so. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Jan 10 16:04:49 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 08:04:49 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> Message-ID: <342642.53797.qm@web36505.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stefano Vaj wrote: > I suspect that Gordon is simply a dualist, with emotional > reasons to think that conscience must be something "special" Not at all Stefano. I consider the world as made of just one kind of stuff, not two or more. In fact I just wrote something yesterday to Damien about how I consider consciousness as a physical state the brain can enter in a manner analogous to that by which water enters a state of solidity. When the brain enters that state, it has a feature we call experience. When it leaves that state, it doesn't. In other words, I think subjective experience exists as part of the same physical world in which we find gum-ball machines, mountains and basketballs. It differs from those things only in that it has a first-person ontology. We can't approach it the same way that we do things with third-person ontologies but this does not in any way make it other-worldly or non-physical. -gts From gts_2000 at yahoo.com Sun Jan 10 15:44:48 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 07:44:48 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <417464.50273.qm@web36502.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stathis Papaioannou wrote: > I specified "use a word appropriately in every context"; > Google can't as yet do that. If and when Google does do that, it will have weak AI or AGI. One might argue that it already has primitive weak AI. Strong AI means something beyond that. It means something beyond merely "using words appropriately in every context". It means "actually knowing the meanings of the words used appropriately". It means "having a mind in the sense that humans have minds, complete with mental contents (semantics)". I can program my computer to answer "Yes" to the question "Do you have something in mind right now?" Will my computer then actually have something in mind when it executes that operation in response to my question? I might find it amusing to imagine so, but I also understand the difference between reality and the things I imagine. 
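For the avoidance of doubt, the program I have in mind is as trivial as this sketch (my own toy wording, in Python): a canned lookup that never connects the symbols to anything at all.

# A deliberately trivial canned-response program: it produces "Yes" by pure
# string matching, with no internal model of anything whatsoever.
CANNED_ANSWERS = {
    "do you have something in mind right now?": "Yes",
}

def answer(question):
    # Pure symbol shuffling: the question is never connected to anything else.
    return CANNED_ANSWERS.get(question.strip().lower(), "I don't know")

print(answer("Do you have something in mind right now?"))   # -> "Yes"

The output is exactly the right answer, and nothing was "in mind" when it was produced.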
-gts From jonkc at bellsouth.net Sun Jan 10 16:17:12 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 11:17:12 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B46C2BB.5000003@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> Message-ID: <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> On Jan 8, 2010, Damien Broderick wrote: > There's no shortage of weird ideas to explain the weird phenomena labeled "psi" There are indeed a lot of explanations of Psi, too many, few of them rational and none of them clear. I think the moral is that before you develop an elaborate theory to explain something make sure that there is an actual phenomena that needs explaining. After well over a century's effort not only have Psi "scientists" failed to explain how it works they haven't even shown that it exists. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Jan 10 16:28:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 08:28:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <514861.16402.qm@web36507.mail.mud.yahoo.com> --- On Sat, 1/9/10, Stathis Papaioannou wrote: > 2010/1/10 BillK : > >> I think what Gordon might be trying to say is that the >> brain is not *digital* computer. Yes Bill, you understand me correctly. Stathis writes: > No-one claims that the brain is a digital computer, but it > can be simulated by a digital computer. If you think simulations of brains on digital computers will have everything real brains have then you must think natural brains work like digital computers. But they don't. -gts From jonkc at bellsouth.net Sun Jan 10 16:32:16 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 11:32:16 -0500 Subject: [ExI] Psi (no need to read this post you already know what it says) In-Reply-To: <19052174.225601262904435226.JavaMail.defaultUser@defaultHost> References: <19052174.225601262904435226.JavaMail.defaultUser@defaultHost> Message-ID: On Jan 7, 2010, scerir wrote: > hey, there are amazing experiments here > http://www.parapsych.org/online_psi_experiments.html > http://www.fourmilab.ch/rpkp/experiments To tell the truth I don't find anything very amazing about them, lots of people know how to type. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefano.vaj at gmail.com Sun Jan 10 16:35:08 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 17:35:08 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <342642.53797.qm@web36505.mail.mud.yahoo.com> References: <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <342642.53797.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21001100835s3f4529f7o1546c3d1ce1c209b@mail.gmail.com> 2010/1/10 Gordon Swobe : > In other words, I think subjective experience exists as part of the same physical world in which we find gum-ball machines, mountains and basketballs. That's crystal clear. I am only saying that the independent "phyisical existence" of something defined as subjective experience, and its hypothetical connection with organic brains, is for you a matter of faith, altogether outside of any kind of scientific proof or disproof. -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 10 16:42:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 17:42:41 +0100 Subject: [ExI] conscience In-Reply-To: <4B490633.8070900@satx.rr.com> References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: <580930c21001100842g755ffb05laa684c179791d93a@mail.gmail.com> 2010/1/9 Damien Broderick : > This is a curious lexical error that non-English writers often make in > English, so I assume there must be only a single word in their languages for > the two very different concepts "conscience" ("the inner sense of what is > right or wrong in one's conduct or motives, impelling one toward right > action") and "consciousness" ( "the state of being conscious; awareness of > one's own existence, sensations, thoughts, surroundings, etc."). I dimly > recall that this is so in French. If so, how do you convey the difference in > Italian, etc? Yes, you are absolutely right. I should have known better, but the truth is that one tends to think in its own mother tongue, and in Neolatin languages those are just two meanings of a single word. See, for an opposite example, "umanesimo" (which mostly refers to the Renaissance cultural movement, and more in general to the overcoming/refusal of theocentrism) and "umanismo" (which does not imply any secularism in Italian, and mostly refers to i) humanities as opposed to hard sciences, ii) anti-transhumanism ii) a kind of vague, politically correct "humanitarian" attitude). -- Stefano Vaj From stefano.vaj at gmail.com Sun Jan 10 16:47:53 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 17:47:53 +0100 Subject: [ExI] conscience In-Reply-To: References: <4B42D3AC.6090504@rawbw.com> <580930c21001080807v4a696a0of3b64116caf00dbf@mail.gmail.com> <580930c21001080859h1dec841eu6e8d9a34b28c4751@mail.gmail.com> <580930c21001091421m606015e5qacf5bfbec848bf67@mail.gmail.com> <4B490633.8070900@satx.rr.com> Message-ID: <580930c21001100847t3b529249kc181f18879fff228@mail.gmail.com> 2010/1/10 BillK : > I don't speak Italian, but it seems reasonable that Italians could say > 'coscienza morale' when they mean conscience. Yes. Or, more often, you pick the right meaning from the context (as in "esame di coscienza" before undergoing confession). But we have the distinction between conscious and conscientious (even though the second term refers to diligence and scrupulous more than to morality). 
-- Stefano Vaj From gts_2000 at yahoo.com Sun Jan 10 16:48:08 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 10 Jan 2010 08:48:08 -0800 (PST) Subject: [ExI] Some new angle about AI In-Reply-To: <580930c21001100835s3f4529f7o1546c3d1ce1c209b@mail.gmail.com> Message-ID: <205764.81556.qm@web36502.mail.mud.yahoo.com> --- On Sun, 1/10/10, Stefano Vaj wrote: >> In other words, I think subjective experience exists >> as part of the same physical world in which we find gum-ball >> machines, mountains and basketballs. > > That's crystal clear. I am only saying that the independent > "phyisical existence" of something defined as subjective experience, > and its hypothetical connection with organic brains, is for you a > matter of faith, altogether outside of any kind of scientific proof > or disproof. A matter of faith? Do you deny the existence of your own experience or its connection with your brain? I assert that 1) I have experience and 2) my experience goes away when someone whacks me in the head with a baseball bat. If you call that a statement of faith then I suppose I'm a devout believer. :) -gts From stefano.vaj at gmail.com Sun Jan 10 17:07:27 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 18:07:27 +0100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> 2010/1/9 Max More : > Avatar: misanthropy in three dimensions > http://www.spiked-online.com/index.php/site/earticle/7895/ > > -- Comments from anyone who has seen the movie? (I haven't yet.) Neither have I, and I am quite impatient to get it on 3D Blu-ray. I am under the impression that it is quite popular amongst, e.g., my fellow Cosmic Engineers, but another quite sobering (albeit from a POV rather different from mine...) review can be found here: http://io9.com/5422666/when-will-white-people-stop-making-movies-like-avatar -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Jan 10 17:05:24 2010 From: spike66 at att.net (spike) Date: Sun, 10 Jan 2010 09:05:24 -0800 Subject: [ExI] trained by simulation: was RE: Avatar: misanthropy in three dimensions In-Reply-To: <20100110043307.96499.qmail@moulton.com> References: <20100110043307.96499.qmail@moulton.com> Message-ID: > ...On Behalf Of moulton at moulton.com > Subject: Re: [ExI] Avatar: misanthropy in three dimensions > > > On Sat, 2010-01-09 at 14:38 -0800, spike wrote: > > ...Fred you and I would have been in our mid 20s by then. > > Well to be honest in 1986 I was well beyond my mid 20s... The years have been kind to you. Whatever you are doing, keep it up. > ...teaching the son to drive about 5 years ago. It was > much easier and the son learned more quickly than his sister > ... because he had played so many video games... Fred Same with my own experience with flight simulators. I had the rare opportunity to fly a Pitts Special (aerobatic stunt plane). From playing with flight simulators, I had a really good intuitive feel for what one can do in such a bird. For instance, the Pitts has a huge long nose sticking way out, and the wing chord is parallel with the fuselage, so to fly straight and level one must fly with the nose up.
http://en.wikipedia.org/wiki/Pitts_Special Since the cockpit is so far aft, when in straight and level flight, the pilot cannot see where she is going. So the easiest way to see in those things is to fly upside down. That would bother some pilots, but in the flight simulators, inverted flight is also the best way to look around, so it seemed natural to me the first time I took the controls. Flying upside down is actually more comfortable on the computer, I found. And if you push on the stick while inverted, the whole red-out negative G thing hurts. But it seems like we should be able to extend the trained-by-simulator concept beyond having drivers and pilots very skilled before they ever climb behind the wheel or into the cockpit. What else? Surgeons? spike From jonkc at bellsouth.net Sun Jan 10 17:13:10 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 12:13:10 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> On Jan 9, 2010, Gordon Swobe wrote: > Human operators ascribe meanings to the symbols their computers manipulate. Computers ascribe meaning to symbols too; if they didn't, they would treat all symbols the same. They don't. And if I had written the quote attributed to Searle where he talks about the meaning of meaningless symbols I would have been deeply embarrassed. > It's an understandable mistake; after all it sure *looks* like computers understand the meanings. And it sure *looks* like humans understand the meanings too, but who knows. > The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" If the computer did understand the meaning you think the machine would continue to operate exactly as it did before, back when it didn't have the slightest understanding of anything. So, given that understanding is a completely useless property, why should computer scientists even bother figuring out ways to make a machine understand? Haven't they got anything better to do? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frankmac at ripco.com Sun Jan 10 17:41:45 2010 From: Frankmac at ripco.com (Frank McElligott) Date: Sun, 10 Jan 2010 12:41:45 -0500 Subject: [ExI] Late to the subject Message-ID: <001e01ca921c$33225cb0$ad753644@sx28047db9d36c> The Russian Gov't is preparing to nuke an asteroid that will be visiting the earth's orbit in 2016. The Russians quote odds of 33 to 1 that it will hit this planet. The United States bookmakers quote the odds at 2000 to 1. I will take 2000 to 1, but after the climate change cooking of the books, does the proverb "fool me once, shame on you; fool me twice, shame on me" take hold concerning this subject? Oh by the bye, when they came up with 2000 to 1, if you add them together "2001: A Space Odyssey" comes to mind, and thus these odds do not even pass the smell test by my nose, and if so should I begin to root for Moscow to succeed in its blast in space? I only bring it up because of the Bruce Willis movie which had the same plot from a few years ago. Frank -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Sun Jan 10 17:58:32 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 11:58:32 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> Message-ID: <4B4A1548.6050407@satx.rr.com> On 1/10/2010 10:17 AM, John Clark wrote: > I think the moral is that before you develop an elaborate theory to > explain something make sure that there is an actual phenomena that needs > explaining. After well over a century's effort not only have Psi > "scientists" failed to explain how it works they haven't even shown that > it exists. You *don't* know that, because you refuse to look at the published evidence##, because it's not in the Journal of Recondite Physics and Engineering. But I tend to agree that theorizing in advance of empirical evidence is pretty pointless--and yet the usual objection to psi from heavy duty scientists at the Journal of Recondite Physics and Engineering is, "We don't care about your anomalies, because you don't have a *theory* that predicts and explains them." My guess is that at some point a new comprehensive theory of spacetime and symmetry will emerge to account for quantum gravity, say, if the Higgs fails to appear, and one of its elements will be the surprise finding that certain psi functions fall out of the equations. Which is why it makes no sense to bet on when the topic will finally be deemed publishable in Nature or Science. You can bring in plenty of evidence of a small effect size, but without a theory to make everyone comfortable the evidence will be ignored. I *suspect* the same might be true of "cold fusion." There does seem to be quite a lot of evidence, but as yet no acceptable theory, so it's easier to assume it's a tale told by an idiot signifying nothing. But as I've said before, my dog isn't in that race so I don't know enough about the form at the track. Damien Broderick ##Here's a typical example of this sort of self-satisfied dismissal; I quote at some length from my book OUTSIDE THE GATES OF SCIENCE: From scerir at libero.it Sun Jan 10 18:09:25 2010 From: scerir at libero.it (scerir) Date: Sun, 10 Jan 2010 19:09:25 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <12761044.404421263146965979.JavaMail.defaultUser@defaultHost> > hey, there are amazing experiments here > http://www.parapsych.org/online_psi_experiments.html > http://www.fourmilab.ch/rpkp/experiments To tell the truth I don't find anything very amazing about them, lots of people know how to type. John K Clark # Well, my score was good with the clock and the pendulum. But maybe you are right.. There are more concrete and amazing things, like the MWI, the anthropic principle, and the like. 
From spike66 at att.net Sun Jan 10 18:12:47 2010 From: spike66 at att.net (spike) Date: Sun, 10 Jan 2010 10:12:47 -0800 Subject: [ExI] Late to the subject In-Reply-To: <001e01ca921c$33225cb0$ad753644@sx28047db9d36c> References: <001e01ca921c$33225cb0$ad753644@sx28047db9d36c> Message-ID: <36BA15ECAD7547C09FA9F7394A649F2D@spike> On Behalf Of Frank McElligott Subject: [ExI] Late to the subject >...The Russian Gov't is preparing to Nuke an astroid that will be visiting the earth's orbit in 2016. The Russians quote an odd of 33 to 1 it will hit this planet. The United States bookmakers quote the odds at 2000 to 1... Frank Frank, nuking an asteroid is such a difficult flight control problem that I will predict failure should they attempt it. There is a cruel irony attached to these kinds of missions: if they succeed and the asteroid breaks up, if any part of the asteroid does manage to re-enter, then the commies will be liable. spike From jonkc at bellsouth.net Sun Jan 10 17:53:24 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 10 Jan 2010 12:53:24 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <483875.29296.qm@web36507.mail.mud.yahoo.com> References: <483875.29296.qm@web36507.mail.mud.yahoo.com> Message-ID: On Jan 9, 2010 Gordon Swobe wrote: > I think consciousness will likely turn out to be just a state that natural brains can enter not unlike water can enter a state of solidity. In a way I sort of agree with that, but I don't see why a computer couldn't do the same thing. And not all water is solid, it's a function of temperature. In your analogy what is the equivalent of temperature? We have enormously powerful evidence that it must be intelligence. We know from direct experience that there is a one to one correspondence between consciousness and intelligence; when we are intelligent we are conscious and when we are not intelligent, as in when we are sleeping or under anesthesia, we are not conscious. > Some people seem to think that if we can compute X on a computer then a computer simulation of X must equal X. But that's just a blatant non sequitur. So if I add 2+2 on my computer and you add 2+2 on your computer it's a blatant non sequitur to think that my 4 is the same as your 4. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Sun Jan 10 18:29:40 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 10 Jan 2010 13:29:40 -0500 Subject: [ExI] Psi (no need to read this post you already know what it says) In-Reply-To: <12761044.404421263146965979.JavaMail.defaultUser@defaultHost> References: <12761044.404421263146965979.JavaMail.defaultUser@defaultHost> Message-ID: <4e3a29501001101029q58b942fcua0c7d84dd8d4625f@mail.gmail.com> Maybe the brain can somehow sort entangled atoms into coherent mental structures that respond to the nonlocal squirming of other, similar structures--that, because of our place in time and space, we have a large number of these particles and have evolved a mental system to control them and even a "language" of sorts, probably close or identical to our brain's neurological language, allowing us to communicate there. This is easy to imagine from an evolutionary standpoint. Humans whose brains could communicate in the tiniest amounts could react sooner to threats and bond closer socially. The mechanism to entangle the entangled would grow in complexity, maybe even contributing to the growth of language. 
This could even imply the idea that telepathic scenarios often involve people close to the receiver. Maybe the number of shared particles. Can entangled particles be spread by phone or computer? The brain is already well known for the seemingly implausible things it manages to obtain, especially in the sorting category--memories and senses and all that. If the brain can manipulate the information carried by minute electrical pulses, why not allow it to recognize and use entangled particles? Particles that behave differently can be categorized by that nature. The brain could do it. I think this is theoretically possible, much moreso than other explanations, and that it could be an important missing piece in fields from linguistics to evolutionary biology to modern physics. Somebody who is learned, please lend us the answer as to if this is actually possible in quantum physics or if I just have a case of "wikipedia PhD." -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jan 10 18:59:16 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 12:59:16 -0600 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain Message-ID: <4B4A2384.7030700@satx.rr.com> New Scientist: You won't find consciousness in the brain 7 January 2010 by Ray Tallis [Raymond Tallis wrote a wonderful deconstruction of deconstruction and poststructuralism, NOT SAUSSURE] MOST neuroscientists, philosophers of the mind and science journalists feel the time is near when we will be able to explain the mystery of human consciousness in terms of the activity of the brain. There is, however, a vocal minority of neurosceptics who contest this orthodoxy. Among them are those who focus on claims neuroscience makes about the preciseness of correlations between indirectly observed neural activity and different mental functions, states or experiences. This was well captured in a 2009 article in Perspectives on Psychological Science by Harold Pashler from the University of California, San Diego, and colleagues, that argued: "...these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained." Believers will counter that this is irrelevant: as our means of capturing and analysing neural activity become more powerful, so we will be able to make more precise correlations between the quantity, pattern and location of neural activity and aspects of consciousness. This may well happen, but my argument is not about technical, probably temporary, limitations. It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is. Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other. 
Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of "aspects" depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness. This disposes of the famous claim by John Searle, Slusser Professor of Philosophy at the University of California, Berkeley: that neural activity and conscious experience stand in the same relationship as molecules of H[2]O to water, with its properties of wetness, coldness, shininess and so on. The analogy fails as the level at which water can be seen as molecules, on the one hand, and as wet, shiny, cold stuff on the other, are intended to correspond to different "levels" at which we are conscious of it. But the existence of levels of experience or of description presupposes consciousness. Water does not intrinsically have these levels. We cannot therefore conclude that when we see what seem to be neural correlates of consciousness that we are seeing consciousness itself. While neural activity of a certain kind is a necessary condition for every manifestation of consciousness, from the lightest sensation to the most exquisitely constructed sense of self, it is neither a sufficient condition of it, nor, still less, is it identical with it. If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could "reach out" to extracranial objects in order to be "of" or "about" them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are "about" the physical object. Biophysical science explains how the light gets in but not how the gaze looks out. Many features of ordinary consciousness also resist neurological explanation. Take the unity of consciousness. I can relate things I experience at a given time (the pressure of the seat on my bottom, the sound of traffic, my thoughts) to one another as elements of a single moment. Researchers have attempted to explain this unity, invoking quantum coherence (the cytoskeletal micro-tubules of Stuart Hameroff at the University of Arizona, and Roger Penrose at the University of Oxford), electromagnetic fields (Johnjoe McFadden, University of Surrey), or rhythmic discharges in the brain (the late Francis Crick). These fail because they assume that an objective unity or uniformity of nerve impulses would be subjectively available, which, of course, it won't be. Even less would this explain the unification of entities that are, at the same time, experienced as distinct. My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this "merging without mushing", this ability to see things as both whole and separate. And there is an insuperable problem with a sense of past and future. Take memory. It is typically seen as being "stored" as the effects of experience which leave enduring changes in, for example, the properties of synapses and consequently in circuitry in the nervous system. But when I "remember", I explicitly reach out of the present to something that is explicitly past. A synapse, being a physical structure, does not have anything other than its present state. 
It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system. This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a "stubbornly persistent illusion". There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness. I believe there is a fundamental, but not obvious, reason why that explanation will always remain incomplete - or unrealisable. This concerns the disjunction between the objects of science and the contents of consciousness. Science begins when we escape our subjective, first-person experiences into objective measurement, and reach towards a vantage point the philosopher Thomas Nagel called "the view from nowhere". You think the table over there is large, I may think it is small. We measure it and find that it is 0.66 metres square. We now characterise the table in a way that is less beholden to personal experience. Thus measurement takes us further from experience and the phenomena of subjective consciousness to a realm where things are described in abstract but quantitative terms. To do its work, physical science has to discard "secondary qualities", such as colour, warmth or cold, taste - in short, the basic contents of consciousness. For the physicist then, light is not in itself bright or colourful, it is a mixture of vibrations in an electromagnetic field of different frequencies. The material world, far from being the noisy, colourful, smelly place we live in, is colourless, silent, full of odourless molecules, atoms, particles, whose nature and behaviour is best described mathematically. In short, physical science is about the marginalisation, or even the disappearance, of phenomenal appearance/qualia, the redness of red wine or the smell of a smelly dog. Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and toward quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them. Material objects require consciousness in order to "appear". Then their "appearings" will depend on the viewpoint of the conscious observer. This must not be taken to imply that there are no constraints on the appearance of objects once they are objects of consciousness. Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions, the construction of self are just symptoms. 
We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object. Profile Ray Tallis trained as a doctor, ultimately becoming professor of geriatric medicine at the University of Manchester, UK, where he oversaw a major neuroscience project. He is a Fellow of the Academy of Medical Sciences and a writer on areas ranging from consciousness to medical ethics From scerir at libero.it Sun Jan 10 19:02:54 2010 From: scerir at libero.it (scerir) Date: Sun, 10 Jan 2010 20:02:54 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <20160872.394151263150174655.JavaMail.defaultUser@defaultHost> > Somebody who is learned, please lend us the answer as to if this is actually > possible in quantum physics or if I just have a case of "wikipedia PhD." > Will Steinberg I'm not learned enough but it seems to me that ... you need more than contemporary physics. Oh, wait, there is something here that might interest people on this list (for obvious reasons, you'll see). Compatibility of Contemporary Physical Theory with Personality Survival. http://www-physics.lbl.gov/~stapp/Compatibility.pdf Henry P. Stapp Theoretical Physics Group Lawrence Berkeley National Laboratory University of California Berkeley, California 94705 From thespike at satx.rr.com Sun Jan 10 19:19:59 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 13:19:59 -0600 Subject: [ExI] Psi (no need to read this post you already know what it says) In-Reply-To: <20160872.394151263150174655.JavaMail.defaultUser@defaultHost> References: <20160872.394151263150174655.JavaMail.defaultUser@defaultHost> Message-ID: <4B4A285F.9000504@satx.rr.com> On 1/10/2010 1:02 PM, scerir quoted: > Henry P. Stapp > Theoretical Physics Group > Lawrence Berkeley National Laboratory But, says John Clark, why should we pay any attention to this bozo, who is probably really a truck driver who failed high school and is just pretending to work for the Lawrence Berkeley National Lab, anyone can type. Which raises the key question (or a key question): What sort of evidence for psi phenomena would be publishable in Nature or Science, and how many replications by independent labs would be needed to make it acceptable? And to be acceptable, is it necessary that the scientists involved have no previous history of work in parapsychology? John failed to reply to my comment that once a reputable scientist or other academic reports apparent evidence for psi, he or she immediately falls into the "loony--safe to ignore" category. (Admittedly, some of the most distinguished scientists with an interest in psi do have a loony side, Nobelists included, and maybe they need to in order to get into that area of investigation to begin with. But we also know that Newton spent more time on astrology, alchemy and biblical codes than he did on physics and optics.) 
Damien Broderick From stefano.vaj at gmail.com Sun Jan 10 21:51:25 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 10 Jan 2010 22:51:25 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <205764.81556.qm@web36502.mail.mud.yahoo.com> References: <580930c21001100835s3f4529f7o1546c3d1ce1c209b@mail.gmail.com> <205764.81556.qm@web36502.mail.mud.yahoo.com> Message-ID: <580930c21001101351q46b2790dx535efeae912e6019@mail.gmail.com> 2010/1/10 Gordon Swobe > A matter of faith? > Do you deny the existence of your own experience or its connection with > your brain? > The concept of "physical existence of subjective experience" sounds to me as philosophically very naive, and I am still waiting for a definition thereof. As for its connection with organic brains, I do not see its projection on them as much more persuasive than that on plenty of other physical phenomena. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jan 10 22:07:12 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 16:07:12 -0600 Subject: [ExI] widely, although not universally, believed Message-ID: <4B4A4F90.9080208@satx.rr.com> http://en.wikipedia.org/wiki/Unruh_effect Although Unruh's prediction that an accelerating detector would see a thermal bath is not controversial, the interpretation of the transitions in the detector in the non-accelerating frame are. It is widely, although not universally, believed that each transition in the detector is accompanied by the emission of a particle, and that this particle will propagate to infinity and be seen as Unruh radiation. The existence of Unruh radiation is not universally accepted. Some claim that it has already been observed,[7] while others claims that it is not emitted at all.[8] While the skeptics accept that an accelerating object thermalises at the Unruh temperature, they do not believe that this leads to the emission of photons, arguing that the emission and absorption rates of the accelerating particle are balanced. [I was very disheartened by this, because I thought for a couple of minutes that I might have found an unexpected source for the cosmic background radiation, especially in a still-accelerating cosmos] From bbenzai at yahoo.com Sun Jan 10 23:53:57 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 10 Jan 2010 15:53:57 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <55510.41327.qm@web113604.mail.gq1.yahoo.com> Stathis Papaioannou wrote: > 2010/1/11 Ben Zaiboc : > > If you hooked Google up to a robotic (or virtual) > body, and gave it the means to sense the environment, and > move the body, and hooked up words to actions, then it would > be capable of understanding (assigning meaning to) the > words, because they would now have a context. > > It gets a bit tricky when you talk about a virtual body in > a virtual > environment. There may be a mapping between what happens in > the > computer when it follows an instruction to move two metres > to the left > and moving two metres to the left in the real world, but > there is no > basis for saying that this is what the symbols in the > computer "mean", > since there are also other possible mappings. 
The meaning of 'two metres to the left' is tied up with signals that represent activating whatever movement system you use (legs, wheels etc.), feedback from that system, confirmatory signals from sensory systems such as differences of visual signals (that picture on the wall is now nearer for instance (as defined by such things as a change in its apparent size)), adjustments in your environment maps, etc, etc., that all fall into the appropriate category. Whether this information is produced by a 'real body' in the 'real world' or a virtual body in a virtual world makes absolutely no difference (after all, we may well be simulations in a simulated world ourselves. Some people think this is highly likely). I imagine it would lead to a pretty precise meaning for whatever internal signal, state or symbol is used for "two metres to the left". Once such a concept is established in the system in question, it can be available for use in different contexts, such as imagining someone else moving two metres to their left, recognising that an object is two metres to your left, etc. It seems to me that in a system of sufficient complexity, with appropriate senses and actuators, 'two metres to the left' is jam-packed with meaning. Ben Zaiboc From stathisp at gmail.com Mon Jan 11 00:43:02 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 11:43:02 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <53116.2093.qm@web36506.mail.mud.yahoo.com> References: <53116.2093.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/11 Gordon Swobe : > --- On Sat, 1/9/10, Stathis Papaioannou wrote: > >>> After leaving the hospital, the patient might tell you >> he believes in Santa Claus, but he won't actually "believe" >> in it; that is, he won't have a conscious subjective >> understanding of the meaning of "Santa Claus". >> >> He has no understanding of words before the operation, but >> he still has understanding! If he sees a dog he knows it's a dog, > > To think coherently about dogs or about anything else, one must understand words and this poor fellow cannot understand his own spoken or unspoken words or the words of others. At all. > > He completely lacks understanding of words, Stathis. Suffering from complete receptive aphasia, he has no coherent thoughts whatsoever. > > We can suppose less serious aphasias if you like, but to keep our experiment pure I have assumed complete receptive aphasia. > > With b-neurons or possibly with m-neurons we can cure him. We p-neurons we can only program him to speak and behave in a way that objective observers will find acceptable, i.e., we can program him to pass the Turing test. The patient with complete receptive aphasia *does* have coherent, if non-verbal thoughts. He can look at a situation, recognise what's going on, make plans for the future. That's thinking. But this is beside the point, as I'm sure you can see. It's easy to change the experiment so that any anatomically localised aspect of consciousness is taken out and replaced with zombie p-neurons. The original example was visual perception. Cram has all the neurons responsible for visual perception replaced and as a consequence (you have to say) he will be completely blind. However, he will behave as if he has normal vision, because the rest of his brain is receiving normal input from the p-neurons. Searle thinks Cram will be blind, notice he is blind, but be unable to do anything about it. 
This is only possible if Cram is able to think with something other than his brain, as you seem to realise, since you said that maybe Searle didn't really mean what he wrote or it was taken out of context to make him look bad. So there are only two remaining alternatives. One is that Cram is not blind but has perfectly normal vision, because you were wrong about the p-neurons lacking consciousness. The other is that Cram is blind but doesn't notice he is blind: honestly believes that nothing has changed and will tell you that you are crazy for saying he is blind when he can describe everything he sees as well as you can. Another example: we replace the neurons in Cram's pain centre with p-neurons, leaving the rest of the brain intact, then torture him. Cram screams and tells you to stop. You calmly inform him that he is deluded: since he has the p-neurons he isn't in pain, he only behaves and thinks he is in pain. So if you believe that it is possible to make p-neurons which behave just like b-neurons but lacking consciousness/understanding/intentionality then you are saying something very strange. You are saying that any conscious modality such as perception or understanding of language can be selectively removed from your brain (by swapping the relevant b-neurons for p-neurons) and not only will it not affect behaviour, you also will not be able to notice that it has been done. Initially you said that this thought experiment was so preposterous that you couldn't even think about it. Then you said that the p-neurons wouldn't actually behave like b-neurons because you don't believe consciousness is an epiphenomenon, which presents a difficulty because you think zombies are possible and the behaviour of the brain is computable. Later you seemed to be saying that the patient who gets the p-neurons will behave normally but won't notice that an aspect of his consciousness is gone because he will become a complete zombie and therefore won't notice anything at all. What is your latest take on what will happen? >> But you claim that Cram will actually have no understanding of >> "dog" despite all this. That is what seems absurd: what else could it >> possibly mean to understand a word if not to use the word appropriately >> and believe you know the meaning of the word? > > Although Cram uses the word "dog" appropriately after the operation, he won't believe he knows the meaning of the word, i.e., he will not understand the word "dog". If that seems absurd to you, remember that he did not understand it before the operation either. In this respect nothing has changed. He will hear the word "dog" and remember that he has to take his dog for a walk. If you ask him to draw a picture of a dog, a cat and a giraffe he will be able to do it. If you ask him to point to the tallest of the three animals he has drawn he will point to the giraffe. That sounds to me like understanding! What more could you possibly want of the poor fellow? -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 11 01:17:57 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 12:17:57 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <514861.16402.qm@web36507.mail.mud.yahoo.com> References: <514861.16402.qm@web36507.mail.mud.yahoo.com> Message-ID: 2010/1/11 Gordon Swobe : >> No-one claims that the brain is a digital computer, but it >> can be simulated by a digital computer. 
> > If you think simulations of brains on digital computers will have everything real brains have then you must think natural brains work like digital computers. But they don't. Gordon, it's sensible to doubt that a digital computer simulating a brain will have the consciousness that the brain has, since it isn't an atom for atom copy of the brain. What I have done is assume that it won't and see where it leads. It leads to the conclusion that any aspect of your consciousness that is anatomically localised can be selectively removed without your behaviour changing and without you noticing (using the rest of your brain) that there has been any change. This seems absurd, since at the very least, you would expect to notice if you suddenly lost your vision or your ability to understand language. So I am forced to conclude that the initial premise, that the brain simulation was unconscious, was wrong. There are only two other premises in this argument which could be disputed: that brain activity is computable and that consciousness is the result of brain activity. -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 11 01:27:35 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 12:27:35 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> Message-ID: 2010/1/11 John Clark : > If the computer did understand the meaning you think the machine would > continue to operate exactly as it did before, back when it didn't have the > slightest understanding of anything. So, given that understanding is a > completely useless property why should computer scientists even bother > figuring out ways to make a machine understand??Haven't they got anything > better to do? Gordon has in mind a special sort of understanding which makes no objective difference and, although he would say it makes a subjective difference, it is not a subjective difference that a person could notice. -- Stathis Papaioannou From thespike at satx.rr.com Mon Jan 11 01:43:05 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jan 2010 19:43:05 -0600 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> Message-ID: <4B4A8229.4060803@satx.rr.com> On 1/10/2010 7:27 PM, Stathis Papaioannou wrote: > Gordon has in mind a special sort of understanding which makes no > objective difference and, although he would say it makes a subjective > difference, it is not a subjective difference that a person could > notice. I have a sneaking suspicion that what is at stake is volitional initiative, conscious weighing of options, the experience of assessing and then acting. Yes, we know a lot of this experience is illusory, or at least misleading, because a large part of the process of "willing" is literally unconscious and precedes awareness, but still one might hope to have a machine that is aware of itself as a person, not just a tool that shuffles through canned responses--even if that can provide some simulation of a person in action. It might turn out that there's no difference, once such a complex machine is programmed right, but until then it seems to me fair to suppose that there could be. None of this concession will satisfy Gordon, I imagine. 
Damien Broderick From stathisp at gmail.com Mon Jan 11 02:55:16 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 13:55:16 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <55510.41327.qm@web113604.mail.gq1.yahoo.com> References: <55510.41327.qm@web113604.mail.gq1.yahoo.com> Message-ID: 2010/1/11 Ben Zaiboc : > The meaning of 'two metres to the left' is tied up with signals that represent activating whatever movement system you use (legs, wheels etc.), feedback from that system, confirmatory signals from sensory systems such as differences of visual signals (that picture on the wall is now nearer for instance (as defined by such things as a change in its apparent size)), adjustments in your environment maps, etc, etc., that all fall into the appropriate category. > > Whether this information is produced by a 'real body' in the 'real world' or a virtual body in a virtual world makes absolutely no difference (after all, we may well be simulations in a simulated world ourselves. Some people think this is highly likely). ?I imagine it would lead to a pretty precise meaning for whatever internal signal, state or symbol is used for "two metres to the left". > > Once such a concept is established in the system in question, it can be available for use in different contexts, such as imagining someone else moving two metres to their left, recognising that an object is two metres to your left, etc. > > It seems to me that in a system of sufficient complexity, with appropriate senses and actuators, 'two metres to the left' is jam-packed with meaning. If we find an intelligent robot as sole survivor of a civilisation completely destroyed when their sun went nova, we can eventually work out what its internal symbols mean by interacting with it. If instead we find a computer that implements a virtual environment with conscious observers, but has no I/O devices, then it is impossible even in principle for us to work out what's going on. And this doesn't just apply to computers: the same would be true if we found a biological brain without sensors or effectors, but still dreaming away in its locked in state. The point is, there is no way to step outside of syntactical relationships between symbols and ascribe absolute meaning. It's syntax all the way down. -- Stathis Papaioannou From stathisp at gmail.com Mon Jan 11 03:10:24 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 14:10:24 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B4A8229.4060803@satx.rr.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> <4B4A8229.4060803@satx.rr.com> Message-ID: 2010/1/11 Damien Broderick : > I have a sneaking suspicion that what is at stake is volitional initiative, > conscious weighing of options, the experience of assessing and then acting. > Yes, we know a lot of this experience is illusory, or at least misleading, > because a large part of the process of "willing" is literally unconscious > and precedes awareness, but still one might hope to have a machine that is > aware of itself as a person, not just a tool that shuffles through canned > responses--even if that can provide some simulation of a person in action. > It might turn out that there's no difference, once such a complex machine is > programmed right, but until then it seems to me fair to suppose that there > could be. None of this concession will satisfy Gordon, I imagine. 
If you make a machine that behaves like a human then it's likely that the machine is at least differently conscious. However, if you make a machine that behaves like a human by replicating the functional structure of a human brain, then that machine would have the same consciousness as the human. If it didn't, it would lead to an absurd concept of consciousness as something that could be partly taken out of someone's mind without them either changing their behaviour or realising that anything unusual had happened. -- Stathis Papaioannou From lcorbin at rawbw.com Mon Jan 11 06:14:21 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jan 2010 22:14:21 -0800 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4A2384.7030700@satx.rr.com> References: <4B4A2384.7030700@satx.rr.com> Message-ID: <4B4AC1BD.2070607@rawbw.com> Damien Broderick wrote: > New Scientist: You won't find consciousness in the brain I don't understand what is being claimed here. An image developed on a photographic plate is similar in structure to some "extra-camera" physical object, and for me to have thoughts about an extra-cranial object seems similar. So how is this different? Also consider Well, a computer program, or even a pretty simple electromechanical device, can consult records! It seems likely to me that the writer is simply reiterating in some subtle way the desire on the part of many for a "first-person" account of consciousness. Which, I think, is impossible (for the simple reason that as soon as this account is recorded extra-cranially, it becomes objective and no longer first person). Lee From stathisp at gmail.com Mon Jan 11 06:22:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jan 2010 17:22:19 +1100 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4AC1BD.2070607@rawbw.com> References: <4B4A2384.7030700@satx.rr.com> <4B4AC1BD.2070607@rawbw.com> Message-ID: 2010/1/11 Lee Corbin : > It seems likely to me that the writer is simply > reiterating in some subtle way the desire on the > part of many for a "first-person" account of > consciousness. Which, I think, is impossible > (for the simple reason that as soon as this > account is recorded extra-cranially, it becomes > objective and no longer first person). I think he's also alluding to the "Hard Problem" of consciousness. The Hard Problem refers to the fact that whatever facts are discovered about the processes underlying consciousness it is always possible to say, "but why should that produce consciousness?" It's not a very helpful question if no possible answer can ever satisfy. -- Stathis Papaioannou From lcorbin at rawbw.com Mon Jan 11 07:09:35 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jan 2010 23:09:35 -0800 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> Message-ID: <4B4ACEAF.2080102@rawbw.com> Stefano writes > ...but another quite sobering (albeit from a > POV rather different from mine...) review can be found here: > http://io9.com/5422666/when-will-white-people-stop-making-movies-like-avatar Of all the reviews linked to so far, this may be addressing the most fundamental, or perhaps most profound, issue. 
Here's a part that I want to speak about: When will whites stop making these movies and start thinking about race in a new way? First, we'll need to stop thinking that white people are the most "relatable" characters in stories. As one blogger put it: By the end of the film you're left wondering why the film needed the Jake Sully character at all. The film could have done just as well by focusing on an actual Na'vi native who comes into contact with crazy humans who have no respect for the environment. I can just see the explanation: "Well, we need someone (an avatar) for the audience to connect with. A normal guy will work better than these tall blue people." However, this is the type of thinking that molds all leads as white male characters (blank slates for the audience to project themselves upon) unless your name is Will Smith. But more than that, whites need to rethink their fantasies about race. Whites need to stop remaking the white guilt story, which is a sneaky way of turning every story about people of color into a story about being white. Well, the problem goes very deep. People *like* to see stories about white people, because the sad fact is that white people have more status. So it's just as in the old days, hearing stories about kings rather than about paupers (Shakespeare, for example, told relatively few stories about entirely ordinary people). In *whatever* culture, it seems---though there have to be a few exceptions---people want their children to be whiter. And don't forget the Japanese, who treasured the whiteness of their women; and when it turned out to be undeniable that white men actually had complexions whiter than their own women, they went into denial about it for some time. Now if that weren't bad enough, *film*, i.e. the nature of film, is also in on the conspiracy against people of color. White faces simply show up much better than dark faces in movies or portraits. Perhaps this is indeed evidence of (a malevolent) God's existence after all: how else the double whammy? Speaking as a white person, I don't need to hear more about my own racial experience. Well, this isn't just about you, buster. I'd like to watch some movies about people of color (ahem, aliens), from the perspective of that group, without injecting a random white (erm, human) character to explain everything to me. Okay, go get a producer to make a movie consisting entirely of black people. The only reference in the movie to white people could be that "they all died out long ago from their own corporate greed and unfriendliness to their environment". Then just see how well your movie does at the box office. (I also imagine that it has cost Disney a pretty penny to present non-white centered characters in movies, animation, and cable series.) Look, the solution to the problem is simple, and we need only wait patiently a few more years. Soon one will be able to control the amount of color that children will be born with, and not long after that people will themselves be able to undergo whitening processes a lot cheaper, more effective, and easier than Michael Jackson's. Then everybody can be as white as they please, and films will become truly equal opportunity. 
Lee From lcorbin at rawbw.com Mon Jan 11 07:13:53 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jan 2010 23:13:53 -0800 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: References: <4B4A2384.7030700@satx.rr.com> <4B4AC1BD.2070607@rawbw.com> Message-ID: <4B4ACFB1.4020406@rawbw.com> Stathis writes > Lee wrote > >> It seems likely to me that the writer is simply >> reiterating in some subtle way the desire on the >> part of many for a "first-person" account of >> consciousness. Which, I think, is impossible >> (for the simple reason that as soon as this >> account is recorded extra-cranially, it becomes >> objective and no longer first person). > > I think he's also alluding to the "Hard Problem" of consciousness. Ah, yes. > The Hard Problem refers to the fact that whatever facts are discovered > about the processes underlying consciousness it is always possible to > say, "but why should that produce consciousness?" Well, thanks for that. I had never heard that rebuttal! It seems true. Very interesting. > It's not a very helpful question if no possible answer > can ever satisfy. Yeah, if that isn't a sign of a "bad question", I don't know what is. Lee From sjatkins at mac.com Mon Jan 11 09:23:12 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 11 Jan 2010 01:23:12 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <2A83ABA6-AD1A-4883-AED1-60B70A95018C@mac.com> On Jan 9, 2010, at 10:54 AM, Gordon Swobe wrote: > --- On Sat, 1/9/10, Ben Zaiboc wrote: > >> In 'Are We Spiritual Machines?: Ray >> Kurzweil vs. the Critics of Strong AI', John Searle says: >> >> "Here is what happened inside Deep Blue. The computer has a >> bunch of meaningless symbols that the programmers use to >> represent the positions of the pieces on the board. It has a >> bunch of equally meaningless symbols that the programmers >> use to represent options for possible moves." >> >> >> This is a perfect example of why I can't take the guy >> seriously. He talks about 'meaningless' symbols, then >> goes on to describe what those symbols mean! He is >> *explicitly* stating that two sets of symbols represent >> positions on a chess board, and options for possible moves, >> respectively, while at the same time claiming that these >> symbols are meaningless. wtf? > > Human operators ascribe meanings to the symbols their computers manipulate. Sometimes humans forget this and pretend that the computers actually understand the meanings. We manipulate symbols ourselves that have no meaning except the one we assign. Worse, what we assign to most of our symbols is actually very murky, approximate and sloppy. Worse still the largest part of our mental processes are sub-symbolic, utterly unconscious output of a very lossy, buggy, biological computer programmed in large part just well enough to survive its environment and reproduce. > > It's an understandable mistake; after all it sure *looks* like computers understand the meanings. But then that's what programmers do for a living: we program dumb machines to make them look like they have understanding. > Human programmers are the primary reasons machines are not much smarter. The conscious explicit reasoning part of our brains is used for programming. It is notoriously weak, limited and only a small recently added experimental extension slapped on top of the original architecture. 
We can't explicitly program beyond the rather simplistic level we can debug. It is amazing our machines are as smart as they are with such constraints. > The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?" > How can you prove that you understand the meanings? > And Searle's answer is: "It won't happen from running formal syntactical programs on hardware as we do today, because computers and their programs cannot and will never get semantics from syntax. But that is just semantics! Sorry, couldn't resist. :) - samantha From sjatkins at mac.com Mon Jan 11 09:29:57 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 11 Jan 2010 01:29:57 -0800 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B48D71A.6010806@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> <4B48D71A.6010806@satx.rr.com> Message-ID: On Jan 9, 2010, at 11:20 AM, Damien Broderick wrote: > On 1/9/2010 12:48 PM, John Clark wrote: >> If a high school dropout who worked as the bathroom attendant at the zoo >> had a website and claimed to have made a major discovery about stem >> cells from an experiment described on that website I would not bother to >> read it. > > Neither would I, probably. When a biochemistry PhD and Research Fellow of the Royal Society, like Sheldrake, does so, I'd be less quick to dismiss his scientific report. Well, I have read bit of Sheldrake. The man rolled up something powerfully mind altering in his diplomas and smoked it as far as I can tell. Morphogenic fields and 100th monkey syndrome indeed. If this passes for science then I don't know why we think science can get us of the "demon haunted world". His reports are not expressed as science, are not verified by repeatable experiment, and do not fit well with existing knowledge or better explain most of what the existing knowledge has had good success explaining and making testable predictions about. I don't see that his credentials have a thing to do with it. - samantha From sjatkins at mac.com Mon Jan 11 09:35:17 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 11 Jan 2010 01:35:17 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B4A8229.4060803@satx.rr.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> <4B4A8229.4060803@satx.rr.com> Message-ID: <0B9EF00A-FE8B-4B0A-9030-E116EAA09A12@mac.com> On Jan 10, 2010, at 5:43 PM, Damien Broderick wrote: > On 1/10/2010 7:27 PM, Stathis Papaioannou wrote: > >> Gordon has in mind a special sort of understanding which makes no >> objective difference and, although he would say it makes a subjective >> difference, it is not a subjective difference that a person could >> notice. > > I have a sneaking suspicion that what is at stake is volitional initiative, conscious weighing of options, the experience of assessing and then acting. 
Yes, we know a lot of this experience is illusory, or at least misleading, because a large part of the process of "willing" is literally unconscious and precedes awareness, but still one might hope to have a machine that is aware of itself as a person, not just a tool that shuffles through canned responses--even if that can provide some simulation of a person in action. As I understand it a lot of the decision process is sub-/unconscious and the conscious mind rationalizes the results often. This does not mean that we are incapable of conscious logic and symbol manipulations, just that a lot of what we do isn't done that way. > It might turn out that there's no difference, once such a complex machine is programmed right, but until then it seems to me fair to suppose that there could be. None of this concession will satisfy Gordon, I imagine. No difference. If we were accidentally "programmed" to do whatever it is we do then there is no reason it could not be programmed on purpose, assuming our brains are just powerful enough. - samantha From stefano.vaj at gmail.com Mon Jan 11 11:52:48 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 11 Jan 2010 12:52:48 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <855333.87547.qm@web36505.mail.mud.yahoo.com> <4B48E373.9030607@satx.rr.com> Message-ID: <580930c21001110352v243cb942s45eba3f092f16c7e@mail.gmail.com> 2010/1/10 Stathis Papaioannou : > No-one claims that the brain is a digital computer, but it can be > simulated by a digital computer. The ideal analogue computer cannot be > emulated by a digital computer because it can use actual real numbers. > However, the real world appears to be quantised rather than > continuous, so actual analogue computers do not use real numbers. And > even if the world turned out to be continuous factors such as thermal > noise would make all the decimal places after the first few in any > parameter irrelevant, so there would be no need to use infinite > precision arithmetic to simulate an analogue device. I think there is little doubt that organic brains do compute things - besides exhibiting other features, such as burning glucides, which I take not to be of the essence in what we seek in a brain emulation. The fact that we can deal with ordinary arithmetics is a good enough example, I think. They do it rather poorly in some areas, and are much better in other. As to the digital/analog divide, I believe it has been shown well enough by Wolfram etc. that analog computers cannot do anything special that digital computers could not do. As to the quantum computing angle, organic brains are no better than digital computers in resolving classical problems which quantum computing should resolve, so supposing that they profit from quantum effects is tantamount to supposing that they profit from dark energy. What else remains to be said? Once the cerebral computations have been emulated, the issue of whether the emulation is "conscious" is not really different from wondering if it has a soul. The answer is social and cultural, not "factual". -- Stefano Vaj From gts_2000 at yahoo.com Mon Jan 11 12:42:41 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 11 Jan 2010 04:42:41 -0800 (PST) Subject: [ExI] Meaningless Symbols. 
In-Reply-To: Message-ID: <186811.743.qm@web36504.mail.mud.yahoo.com> --- On Sun, 1/10/10, Stathis Papaioannou wrote: > 2010/1/11 John Clark : > >> If the computer did understand the meaning you think >> the machine would continue to operate exactly as it did before, >> back when it didn't have the slightest understanding of anything. >> So, given that understanding is a completely useless property why >> should computer scientists even bother figuring out ways to make a >> machine understand??Haven't they got anything >> better to do? > > Gordon has in mind a special sort of understanding which > makes no objective difference and, although he would say it makes a > subjective difference, it is not a subjective difference that a person > could notice. Not so. The conscious intentionality I have in mind certainly does make a tremendously important subjective difference in every person. You and I have it but vegetables and unconscious philosophical zombies do not. It appears software/hardware systems also do not and cannot. John makes the point that one might argue that it does not matter from a practical point of view if software/hardware systems can or cannot have consciousness. I don't disagree, and I have no axe to grind on that subject. I don't pretend to defend strong AI research in software/hardware systems. My interest concerns the ramifications of the seeming hopelessness of strong AI research in s/h systems for us as humans. It tells us something important in the philosophy of mind. I wonder if everyone understands that if strong AI cannot work in digital computers then it follows that neither can "uploading" work as that term normally finds usage here. -gts From stefano.vaj at gmail.com Mon Jan 11 13:00:07 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 11 Jan 2010 14:00:07 +0100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <4B4ACEAF.2080102@rawbw.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> <4B4ACEAF.2080102@rawbw.com> Message-ID: <580930c21001110500j754b2a03j383fb7527da533bc@mail.gmail.com> 2010/1/11 Lee Corbin : > Look, the solution to the problem is simple, and we need > only wait patiently a few more years. Soon one will be > able to control the amount of color that children will > be born with, and not long after that people will > themselves be able to undergo whitening processes a lot > cheaper, more effective, and easier than Michael Jackson's. > Then everybody can be as white as they please, and films > will become truly equal opportunity. ... and we will be facing a dramatic loss of biodiversity. :-) It is true that "fairness" is often cross-culturally a sought-after beauty feature, but I think this may have to do, more than with some truly "universal" canon, with: - the status symbol arising from the indication that the individual concerned need not to work in the fields (interestingly, UVA salons started operations when the rich become those who spent more time outdoor than those enslaved in cavernous offices...); - the fact that everybody is somewhat fairer in its young age; - the "prestige" derived from the historical success of Europoids, not to mention the presence of their genes in the ruling classes of many areas in the world. In fact, it is customary in transhumanist circles to discuss (critically) the misdeeds of State eugenism. 
In truth, as pointed out by Habermas, the police need not really be around enforcing eugenic policies, since its intervention would be on the contrary required if the State ever decided to forbid parents to make use of technology to conform with social norms! This is why I think it is important, for those who are not keen on such an entropic process taking place at a global scale, to protect and foster cultural differences and a plurality of models and "optimality" views throughout different communities. Up to and beyond speciation, as far as I am concerned... ;-) -- Stefano Vaj From stathisp at gmail.com Mon Jan 11 13:04:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jan 2010 00:04:27 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <186811.743.qm@web36504.mail.mud.yahoo.com> References: <186811.743.qm@web36504.mail.mud.yahoo.com> Message-ID: 2010/1/11 Gordon Swobe : > I wonder if everyone understands that if strong AI cannot work in digital computers then it follows that neither can "uploading" work as that term normally finds usage here. This is true, and it's the main reason I've persevered with this thread. One day it may not be an abstract philosophical problem but a serious practical problem: you would want to be very sure before agreeing to upload that you're not killing yourself. For the reasons I've described, I'm satisfied that the philosophical problem is solved in favour of uploading and strong AI. Of course, there remains the far more difficult technical problem, and the possibility, however unlikely, that the brain is not computable. -- Stathis Papaioannou From stefano.vaj at gmail.com Mon Jan 11 13:13:38 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 11 Jan 2010 14:13:38 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B4A8229.4060803@satx.rr.com> References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> <4B4A8229.4060803@satx.rr.com> Message-ID: <580930c21001110513v7b9beeb4k52c18c29ecb6848c@mail.gmail.com> 2010/1/11 Damien Broderick : > Yes, we know a lot of this experience is illusory, or at least misleading, > because a large part of the process of "willing" is literally unconscious > and precedes awareness, but still one might hope to have a machine that is > aware of itself as a person, not just a tool that shuffles through canned > responses--even if that can provide some simulation of a person in action. The real difference between "canned" and "ad-hoc" responses, I believe, is simply in the numbers thereof. Both are certainly finite, and I suspect it is purely a matter of accuracy if we find emulations based on a small set of the first to be rough and unsatisfactory... Even though very simple organic brains (and perhaps very stupid human beings) behave not that differently. Another, entirely different issue is whether such "brute force" emulation is really the most practical way to develop persuasively "conscious" (that is, conscious tout court) entities. I do not remember who calculated that a Chinese room would offer a couple of minutes of "consciousness" in the entire expected duration of the universe... 
-- Stefano Vaj From gts_2000 at yahoo.com Mon Jan 11 13:42:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 11 Jan 2010 05:42:02 -0800 (PST) Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4A2384.7030700@satx.rr.com> Message-ID: <410987.34614.qm@web36503.mail.mud.yahoo.com> --- On Sun, 1/10/10, Damien Broderick wrote: > This author seems to argue for metaphysical dualism; for the existence of a mental world distinct from the world of matter. As he writes: > but my argument is not about > technical, probably temporary, limitations. It is about the > deep philosophical confusion embedded in the assumption that if > you can correlate neural activity with consciousness, then you have > demonstrated they are one and the same thing, and that a physical science > such as neurophysiology is able to show what consciousness truly > is. In other words, he argues like Descartes that matter in the brain does not have consciousness, that it must come from somewhere else and exist in some way above, beyond or outside matter. I suppose he must think mental phenomena come from god rather than from its neurological substrate, though he never says so explicitly. > This disposes of the famous claim by John Searle, Slusser > Professor of Philosophy at the University of California, Berkeley: > that neural activity and conscious experience stand in the same > relationship as molecules of H[2]O to water, with its properties of > wetness, coldness, shininess and so on. The analogy fails as the > level at which water can be seen as molecules, on the one hand, and > as wet, shiny, cold stuff on the other, are intended to correspond > to different "levels" at which we are conscious of it. But > the existence of levels of experience or of description > presupposes consciousness. Water does not intrinsically have these > levels. Here he misrepresents or misunderstands Searle. In my reading of Searle, he uses the example of the solid state of pistons in an engine as analogous to the conscious state of the brain (I thought I invented the water analogy on my own, admittedly a poor one now that I think of it, but here Searle is said to use it too). In any case, the property of solidity does not as this author tries to argue "presuppose consciousness". Solid objects have the physical property of impenetrability no matter whether anyone knows of it. Likewise liquid states have the property of liquidity and gaseous states have the property of gases independent of consciousness, and these are the sorts of analogies that Searle *actually* makes. This author either does not know this or else hopes the reader will not, and his entire argument depends on this false characterization. As I have it, consciousness has a first-person ontology and for the sake of saving the concept it ought not be *ontologically* reduced. However it may nonetheless be *causually* reduced to its neuronal substrate, something this author has a problem with. But in fact medical doctors do this all the time when discussing drugs and cures for illnesses that affect subjective experience. I have a tooth-ache this morning for example (I really do). I can take a pill for the conscious pain, and science can rightly concern itself with explanations as to why the pill works to kill the pain. Consciousness is in this way causally reducible to its neuronal substrate, even if it makes no sense to reduce it ontologically. 
Much confusion arises as a result of not understanding the need for a distinction between ontological and causal reduction. When considering almost anything else in the world aside from conscious experience, we simultaneously do an ontological and a causal reduction. -gts From pharos at gmail.com Mon Jan 11 14:29:32 2010 From: pharos at gmail.com (BillK) Date: Mon, 11 Jan 2010 14:29:32 +0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <186811.743.qm@web36504.mail.mud.yahoo.com> Message-ID: On 1/11/10, Stathis Papaioannou wrote: > This is true, and it's the main reason I've persevered with this > thread. One day it may not be an abstract philosophical problem but a > serious practical problem: you would want to be very sure before > agreeing to upload that you're not killing yourself. For the reasons > I've described, I'm satisfied that the philosophical problem is solved > in favour of uploading and strong AI. Of course, there remains the far > more difficult technical problem, and the possibility, however > unlikely, that the brain is not computable. > > I don't see this as a problem unless you insist that the human body/brain *must* be destroyed during the upload/copy process. I would be very interested in having a copy of my massive intellect running in one of these new netbooks (circa 2020). I would be reorganising, tuning, rebuilding routines, patching, etc. like mad. (And you thought patching Windows was bad!). I would prefer that it didn't have any 'consciousness' features as I don't appreciate my computer whining and bitching about the work I'm doing on it. BillK From ddraig at gmail.com Mon Jan 11 07:21:52 2010 From: ddraig at gmail.com (ddraig) Date: Mon, 11 Jan 2010 18:21:52 +1100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> Message-ID: 2010/1/11 Stefano Vaj : > Neither have I, and I am quite impatient to get it on 3d blu-ray. You will watch it in 3d at home? How? Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From bbenzai at yahoo.com Mon Jan 11 16:18:00 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 11 Jan 2010 08:18:00 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <552968.20592.qm@web113618.mail.gq1.yahoo.com> Damien Broderick wrote: On 1/10/2010 7:27 PM, Stathis Papaioannou wrote: >> Gordon has in mind a special sort of understanding which makes no >> objective difference and, although he would say it makes a subjective >> difference, it is not a subjective difference that a person could >> notice. > > I have a sneaking suspicion that what is at stake is volitional > initiative, conscious weighing of options, the experience of assessing > and then acting. Yes, we know a lot of this experience is illusory, or > at least misleading, because a large part of the process of "willing" is > literally unconscious and precedes awareness, but still one might hope > to have a machine that is aware of itself as a person, not just a tool > that shuffles through canned responses--even if that can provide some > simulation of a person in action. 
It might turn out that there's no > difference, once such a complex machine is programmed right, but until > then it seems to me fair to suppose that there could be. None of this > concession will satisfy Gordon, I imagine. I have a very strong suspicion that a tool that shuffles through canned responses can never even approach the performance of a self-aware person, and only another self-aware person can do that. If you program your complex machine right, you will have to include whatever features give it self-awareness, consciousness, or whatever you want to call it. In other words, philosophical zombies are impossible, because the only way of emulating a self-aware entity is to in fact be one. Ben Zaiboc From bbenzai at yahoo.com Mon Jan 11 16:21:30 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 11 Jan 2010 08:21:30 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <591989.12968.qm@web113609.mail.gq1.yahoo.com> Stathis Papaioannou wrote: 2010/1/11 Ben Zaiboc : >> Whether this information is produced by a 'real body' in the 'real world' or a virtual body in a virtual world makes absolutely no difference (after all, we may well be simulations in a simulated world ourselves. Some people think this is highly likely). ?I imagine it would lead to a pretty precise meaning for whatever internal signal, state or symbol is used for "two metres to the left". > > If we find an intelligent robot as sole survivor of a civilisation > completely destroyed when their sun went nova, we can eventually work > out what its internal symbols mean by interacting with it. If instead > we find a computer that implements a virtual environment with > conscious observers, but has no I/O devices, then it is impossible > even in principle for us to work out what's going on. And this doesn't > just apply to computers: the same would be true if we found a > biological brain without sensors or effectors, but still dreaming away > in its locked in state. The point is, there is no way to step outside > of syntactical relationships between symbols and ascribe absolute > meaning. It's syntax all the way down. No argument here about it being syntax all the way down, as long as you apply this to 'real-world' systems as well as simulations. In your example, you may be right about us not being able to understand what's going on, because we don't inhabit that level of reality (the alien sim), and have no knowledge of how it works. But so what? Just because we don't understand it, doesn't mean that the virtual mind doesn't have meaningful experiences. Presumably the sim would map well onto the original aliens' 'real reality', though, which might baffle us initially, but would be a solvable problem, meaning that the sim would also in principle be solvable (unless you think we can never decipher Linear A). In a human-created sim, of course, we decide what represents what. Having written the sim, we can understand it, and relate to the mind in there. In this case, there's no difference (in terms of meaning and experience) between a pizza being devoured on level 1 or on level 2, as long as the pizza belongs to the same reality level as the devourer, and one level is well-mapped to the other. (I am of course, talking about proper simulations, with dynamic behaviour and enough richness and complexity to reproduce all the desired features at the necessary resolution, rather than a 'Swobian simulation', such as a photograph or a cartoon). 
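To put the 'two metres to the left' point in concrete form, here is a toy sketch in Python (purely illustrative; it assumes nothing beyond the standard library, and every name in it is invented for the example). The internal symbol gets whatever meaning it has from the motor command, the predicted sensory feedback and the map update it is wired into, and that wiring is the same whether the body and world are physical or simulated.

from dataclasses import dataclass, field

@dataclass
class Agent:
    x: float = 0.0                                   # position in metres
    world_map: dict = field(default_factory=dict)    # remembered object positions

    def sense_distance_to(self, obj: str) -> float:
        # Crude 'visual' feedback: apparent distance to a known object
        return abs(self.world_map[obj] - self.x)

    def two_metres_to_the_left(self) -> None:
        # The 'symbol': issue the movement command, then check the feedback
        # against what the command was supposed to mean
        before = self.sense_distance_to("picture_on_wall")
        self.x -= 2.0                                # activate the movement system
        after = self.sense_distance_to("picture_on_wall")
        assert abs(abs(before - after) - 2.0) < 1e-9  # prediction confirmed

agent = Agent(x=0.0, world_map={"picture_on_wall": -5.0})
agent.two_metres_to_the_left()
print(agent.x)   # -2.0; the same symbol is now available in other contexts

Nothing in the sketch cares whether the metres are physical or simulated; the meaning lives in that web of use.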
Ben Zaiboc From jonkc at bellsouth.net Mon Jan 11 16:53:55 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 11:53:55 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4B4A1548.6050407@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com><398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com><6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com><0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> Message-ID: <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> On Jan 10, 2010, Damien Broderick wrote: >> After well over a century's effort not only have Psi >> "scientists" failed to explain how it works they haven't even shown that >> it exists. > > You *don't* know that, because you refuse to look at the published evidence## THAT IS NOT EVIDENCE, THAT IS TYPING. There is no point in me looking at it; I already know what it's going to say: I set up the experiment this way, I had these controls, all the people were honest and competent, I got these amazing results, and I was really really really really careful. The trouble is I have no way of knowing if one word of that is true, I don't even have a way of knowing if there was an experiment done at all; for all I know it's just an exercise in typing. > the usual objection to psi from heavy duty scientists at the Journal of Recondite Physics and Engineering is, "We don't care about your anomalies, because you don't have a *theory* that predicts and explains them." Damien, I think even you know that is Bullshit. Nobody has a theory worth a damn explaining the acceleration of the entire Universe and nobody predicted it, yet the fact of its existence is accepted by those foolish conservative stick-in-the-mud mainstream scientists because the evidence is just so good. Before it was observed nobody predicted the existence of Dark Matter even though it is by far the most abundant form of matter in the universe, and to this day nobody can explain it, nevertheless those silly mainstream scientists believe it exists because the evidence is just so good. Nobody predicted X Rays before Röntgen discovered them and neither he nor anybody else had a theory to explain them, but he became the most lionized physicist of his day and received the very first Nobel Prize in Physics. And even Darwin's Theory of Evolution, which has about as much emotion and prejudice aimed against it as it's possible for a Scientific theory to have, was accepted by the mainstream scientific community in only about a decade, and when he died Darwin was given a hero's funeral and buried in Westminster Abbey right next to Newton.
You are asking us to believe that third-string experimenters using equipment that cost almost nothing can detect a vital new fact about our universe, and have actually been doing exactly that for centuries; and yet in all that time not one world-class experimenter can repeat the feat and allow the fact of Psi to become generally accepted, and you expect this grotesque situation will continue for at least another year, hence your refusal to take my bet. Damien, that just is not credible. > I *suspect* the same might be true of "cold fusion." I too suspect the same is true for cold fusion, more than suspect actually. > what happened in 2001 when the Royal Mail in Britain published a special brochure to accompany their issue of special stamps to commemorate British Nobel Prize winners. Dr. Brian Josephson, Nobel physics laureate in 1973, took the opportunity to draw attention to anomalies research: "Quantum theory is now being fruitfully combined with theories of information and computation. These developments may lead to an explanation of processes still not understood within conventional science such as telepathy." In the last 9 years microprocessors have become about 65 times as powerful, it was found that the expansion of the Universe is accelerating, and a probe was sent to Pluto; in the last 9 years what new advances have occurred in "these developments" that Josephson speaks of? Zero, nada, zilch, goose egg. > Josephson responded in the Observer newspaper on October 7, 2001: > > The problem is that scientists critical of this research do not give their normal careful attention to the scientific literature on the paranormal: it is much easier instead to accept official views or views of biased skeptics . . . Obviously the critics are unaware that in a paper published in 1989 in a refereed physics journal, Fotini Pallikari and I demonstrated a way in which a particular version of quantum theory could get round the usual restrictions against the exploitation of the telepathy-like connections in a quantum system. Another physicist discovered the same principle independently; so far no one has pointed out any flaws. I assume that Josephson is talking about the short paper "Biological Utilisation of Quantum NonLocality"; well... to find a flaw in something, the thing in question must have some substance. That paper makes no predictions, suggests no new experiments, and contains not a single equation; it's just a bunch of vague philosophical musings. > An academic and science correspondent for the London Sunday Telegraph, Robert Matthews, commented sharply in November 1991: > "there is now a wealth of evidence for the existence of ESP" In the last 19 years microprocessors have become about 6500 times as powerful, how much has this "wealth of evidence" in support of ESP increased in that time? Zero, nada, zilch, goose egg. And who the hell is Robert Matthews? > It turns out quantum theory is right, Einstein's wrong and that particles or systems that are in part of the same system, when apart, retain this nonlocal connection . . . If quantum theory is truly fundamental, then we may be seeing something analogous, even homologous, at the level of organisms. Insofar as people are thinking theories of telepathy, then this is one of the prime contenders. That is incorrect. Yes, non-local connections exist and yes, you can instantly change something on the other side of the universe, but you can't use that fact to send information, and that's what you'd need to do to make telepathy work.
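A minimal numerical sketch of that last claim, assuming only textbook quantum mechanics and NumPy (the function names are invented for this illustration): however Alice orients her apparatus, or whether she measures at all, Bob's local statistics remain a 50/50 coin, so her choice of setting cannot encode a message, even though the joint correlations depend on both settings.

import numpy as np

# Pauli matrices; measurement axes lie in the x-z plane
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projectors(theta):
    # Projectors for spin +1/-1 along an axis tilted by angle theta
    n = np.cos(theta) * sz + np.sin(theta) * sx
    return (I2 + n) / 2, (I2 - n) / 2

# Singlet state (|01> - |10>)/sqrt(2) as a density matrix
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def prob(joint_op):
    return float(np.real(np.trace(joint_op @ rho)))

def bob_p_up(theta_a, theta_b):
    # Bob's probability of 'up', summed over Alice's unseen outcomes
    a_up, a_down = projectors(theta_a)
    b_up, _ = projectors(theta_b)
    return prob(np.kron(a_up, b_up)) + prob(np.kron(a_down, b_up))

def p_same(theta_a, theta_b):
    # Probability the two results agree: this *does* depend on both settings
    a_up, a_down = projectors(theta_a)
    b_up, b_down = projectors(theta_b)
    return prob(np.kron(a_up, b_up)) + prob(np.kron(a_down, b_down))

for theta_a in (0.0, np.pi / 4, np.pi / 2, 2.0):
    print(round(bob_p_up(theta_a, 0.0), 6), round(p_same(theta_a, 0.0), 6))
# bob_p_up prints 0.5 for every setting Alice chooses (no usable signal),
# while p_same varies with the angle between the settings (the correlation).

The correlations only become visible when the two lists of results are brought together through an ordinary channel, which is where the needed standard to measure the change against comes from.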
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 11 16:41:45 2010 From: spike66 at att.net (spike) Date: Mon, 11 Jan 2010 08:41:45 -0800 Subject: [ExI] i have been anticipating this development for years Message-ID: <6161400420EB4CA0BB6F51C2EBA1ACB7@spike> Not that I would buy one for myself, but in principle you understand: http://www.foxnews.com/scitech/2010/01/11/worlds-life-size-robot-girlfriend/ ?test=faces spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Mon Jan 11 17:41:52 2010 From: max at maxmore.com (Max More) Date: Mon, 11 Jan 2010 11:41:52 -0600 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto Message-ID: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> Presumably Emlyn and some others here will strongly disagree with Lanier's new book -- at least based on the interview included on the Amazon page... http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647/ref=pe_37960_14063560_as_txt_1/ From that interview, his views are worth pondering, but he does seem to be excessively anti-Web 2.0/collective wisdom. Max From lcorbin at rawbw.com Mon Jan 11 18:30:49 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 11 Jan 2010 10:30:49 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <575831.58228.qm@web36504.mail.mud.yahoo.com> <519FDC3E-9D11-41ED-A086-7703A817C47E@bellsouth.net> Message-ID: <4B4B6E59.7020205@rawbw.com> Stathis Papaioannou wrote: > 2010/1/11 John Clark : > >> If the computer did understand the meaning you think the machine would >> continue to operate exactly as it did before, back when it didn't have the >> slightest understanding of anything. So, given that understanding is a >> completely useless property why should computer scientists even bother >> figuring out ways to make a machine understand? Haven't they got anything >> better to do? > > Gordon has in mind a special sort of understanding which makes no > objective difference and, although he would say it makes a subjective > difference, it is not a subjective difference that a person could > notice. Well, no, those who wish to make the case for the possible existence of zombies are making a far stronger claim than that; they're claiming that there wouldn't even be a "noticer" at all. I.e., rather than any subjective difference, they posit simply there being no subject. To me, the claim in not at all incoherent, merely extremely unlikely. And a variation on what John Clark says above is this old argument: if "true understanding" doesn't make any difference, then why did evolution bother to manufacture it? 
Lee From steinberg.will at gmail.com Mon Jan 11 18:30:45 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 13:30:45 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> Message-ID: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> A possible method for deducing the entanglement of particles: A "neuron trap" contains an ion, with spin x and a magnetic dipole d. If the spin reverses, the dipole will reverse and cause polarization of the neuron, which could be connected in parallel to another neuron, which would be connected to a larger analysis system; if the neuron pair shows polarization from within, the ion can be either integrated into molecules for transport to the entanglement zone or chelated and moved with some endocrine stuff. I don't know whether these entanglements could be actually understood or coordinated, but I think it would be true that, perhaps through atmospheric transmission, entangled gas atoms accrue between people in close physical contact and are all stored in the entanglement bank; when the system realizes many of its atoms are changing simultaneously based on the other person's (speculation speculation speculation,) it understands it. Maybe the system is wired to think, say, in binary, and constantly "broadcasts" a sequential, morse code-like binary throughout all entangled atoms and systems, but only receivable if one has the necessary amount of entanglement, furthered by contact. Would explain familial TP and all that stuff, and sometimes random TP because of chaotic coincidences. That makes sense and could be explained to have arisen through early humans or even animals; a pattern of X would make the tribe run away. Maybe language itself is an outpouring of this mental language where the simplicity of said language could not commonly express complicated concepts. The brain can do things that are much, much more complicated, like PRODUCE QUALIA AND CONSCIOUSNESS. So...this seems possible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Jan 11 18:31:34 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 13:31:34 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> Message-ID: <4e3a29501001111031j1402e5e8h6ec4e2cb00f724cb@mail.gmail.com> Also to be noted--this is experiment-friendly. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sparge at gmail.com Mon Jan 11 18:42:49 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 11 Jan 2010 13:42:49 -0500 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto In-Reply-To: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> References: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> Message-ID: On Mon, Jan 11, 2010 at 12:41 PM, Max More wrote: > Presumably Emlyn and some others here will strongly disagree with Lanier's > new book -- at least based on the interview included on the Amazon page... > > http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647/ref=pe_37960_14063560_as_txt_1/ > > From that interview, his views are worth pondering, but he does seem to be > excessively anti-Web 2.0/collective wisdom. >From the interview: "Collectivists adore a computer operating system called LINUX, for instance, but it is really only one example of a descendant of a 1970s technology called UNIX. If it weren?t produced by a collective, there would be nothing remarkable about it at all." Nobody is arguing that Linux's design is innovative, so, yes, what's remarkable about it is that it was produced by a collective. That's like saying there'd be nothing remarkable about Mozart if he wasn't a composer. "Meanwhile, the truly remarkable designs that couldn?t have existed 30 years ago, like the iPhone, all come out of "closed" shops where individuals create something and polish it before it is released to the public. Collectivists confuse ideology with achievement." Now that's just bullshit. The iPhone is a *good* design, but it's not remarkable. It's the almost inevitable result of a long string of technological developments. Apple's "closed shop" enabled it to beat the "collectivist" competition to market, but not by much. -Dave From thespike at satx.rr.com Mon Jan 11 19:31:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 13:31:17 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> Message-ID: <4B4B7C85.5030600@satx.rr.com> On 1/11/2010 12:30 PM, Will Steinberg wrote: > Maybe language itself is an outpouring of this mental language where the > simplicity of said language could not commonly express complicated > concepts. Possibly something like that. The experience from Star Gate and elsewhere suggests strongly that psi does not handle detailed alphanumerics well, and often flips images right to left; the main feature is a powerful correlation with entropy gradients in the target. This is why remote viewing and Ganzfeld protocols are more effective in eliciting informative coincidences than the classic boring card-guessing experiments. In a sense, psi is *all* semantics--more paradigmatics and less syntagmatics (in the terminology of semiotics).## Damien Broderick ## e.g. 
From thespike at satx.rr.com Mon Jan 11 19:48:28 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 13:48:28 -0600 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7B92A9BF-CA20-45D7-ABFD-D7E4FD2A1C19@GMAIL.COM> <79471C131D7F4EE28EE05A725ED29AED@spike> <4B461B25.6070405@satx.rr.com> <398dca511001070955u45b9f60at2b4ed79fc5cedc13@mail.gmail.com> <02c601ca8fc6$cff83fd0$fd00a8c0@cpdhemm> <398dca511001071201k167524b1y7215a39e3cfb3ec2@mail.gmail.com> <6405055B9B8D4FF2AEFD28B62B2C8223@spike> <4B4690D0.5000200@satx.rr.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <7F05158A-9D49-40CF-9859-CC363E193A1E@bellsouth.net> <4B48D71A.6010806@satx.rr.com> Message-ID: <4B4B808C.8050204@satx.rr.com> On 1/11/2010 3:29 AM, Samantha Atkins wrote: > Well, I have read bit of Sheldrake. The man rolled up something powerfully mind altering in his diplomas and smoked it as far as I can tell. Morphogenic fields and 100th monkey syndrome indeed. If this passes for science then I don't know why we think science can get us of the "demon haunted world".< I agree. Hence my comparison with Newton's alchemy etc. But do try to decouple the zany attempts to explain from the careful (or sloppy, if that's demonstrably the case) experiments. > His reports are not expressed as science,< What do you object to in the phone experiments I cited? They seem very clear, clean, objective. Unless he and all the participants were just making it up. > are not verified by repeatable experiment,< Of course they are, and have been. > and do not fit well with existing knowledge or better explain most of what the existing knowledge has had good success explaining and making testable predictions about.< Yes, that's the real problem, in my view. But now you've slipped back to invoking his half-baked hypotheses rather than the empirical evidence of his experiments (and those of others replicating them, or which he's replicating). Many skeptics are delighted to hear that Randi has totally debunked the claims that some dogs have a significantly higher than chance likelihood of knowing when their humans are coming home, even when those arrivals are scheduled randomly. Randi declared that his own experiments had shown this was BULLSHIT, and that Sheldrake's were bogus. Well, he did until it was demonstrated (and he finally admitted) that he actually *hadn't* ever done such trials himself, and he was wrong about Sheldrake's. I don't have a strong opinion about the claim one way or the other, but I'm always amused and astonished by the way professional doubters like Randi just *make stuff up* and get away with it. It's precisely what John Clark asserts about the psi claimants--such people can talk and type, and that's it. The locus classicus was the sTARBABY debacle (it's easy to look it up), where again the substantive issue is irrelevant; what's salient is the way CSICOPS scrambled to deny and hide their own confirmatory findings. Strong opinions on both sides often lead to wild bogosity; it's not just the Sheldrakes who need to be tested for probity. 
Damien Broderick From jonkc at bellsouth.net Mon Jan 11 20:56:31 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 15:56:31 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> Message-ID: <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> On Jan 11, 2010, Will Steinberg wrote: > Maybe the system is wired to think, say, in binary, and constantly "broadcasts" a sequential, morse code-like binary throughout all entangled atoms and systems That won't work because quantum entanglement can't transmit information, it can change things at a distance but you need more than that to send a message, you also need a standard to measure that change against, and that is where entanglement falls short. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 11 21:07:03 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 15:07:03 -0600 Subject: [ExI] quantum entanglement In-Reply-To: <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> Message-ID: <4B4B92F7.2090903@satx.rr.com> On 1/11/2010 2:56 PM, John Clark wrote: > quantum entanglement can't transmit information, it can change things at > a distance but you need more than that to send a message, you also need > a standard to measure that change against, and that is where > entanglement falls short. Isn't the message problem that you can't *force* a predictable change upon part of an entangled system? If A's particle spin is up, then B's is down, okay, you know that--but A can't *make* her particle go spin up when she wants it to without breaking the entanglement. No? 
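A toy sketch may make the no-signalling point concrete (this is only an illustration: shared classical randomness stands in for the entangled pair, which reproduces the perfect correlation but not the Bell-inequality violations that need real entanglement, and every name and number below is invented):

import random

# Two parties share a hidden random seed fixed when the pair is created;
# neither of them gets to choose what any particular outcome will be.
def entangled_dice(n_rolls, seed):
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n_rolls)]

shared_seed = 42                            # set at the source, not by A or B
alice = entangled_dice(1000, shared_seed)   # rolled on one side of the planet
bob = entangled_dice(1000, shared_seed)     # rolled on the other side

print(sum(alice) / len(alice))   # ~3.5: each list on its own looks like fair dice
print(alice == bob)              # True: the match only shows up when the lists are compared

# Because neither side can force a particular outcome, the correlation is only
# visible after the two records are brought together by ordinary means, so no
# message rides on the correlation itself.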
Damien Broderick From steinberg.will at gmail.com Mon Jan 11 21:13:56 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 16:13:56 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> Message-ID: <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> 2010/1/11 John Clark > On Jan 11, 2010, Will Steinberg wrote: > > That won't work because quantum entanglement can't transmit information, it > can change things at a distance but you need more than that to send a > message, you also need a standard to measure that change against, and that > is where entanglement falls short. > The observer effect can still effect (I had to) information, just not information with any pushing power--but this is not needed for communication. If, because fatalism hold true, the entangled qubits produce an informational response within the brain, that same response can be predicted given a standard-ish "mental language." I may not be able to cause a particle to spin, but I think many (Especially you, John) would agree that "I" can not cause my hands to move or a sentence to form, given causality. We are already energic outbursts of little bits and bots trying to wiggle their way back to entropy, formed only by the motion of energy through us like an electron transport chain. The brain makes decisions and THEN we interpret them; this is fact; this has been tested using fMRI. Extension to another observational medium can't be too hard. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Jan 11 21:19:54 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 16:19:54 -0500 Subject: [ExI] quantum entanglement In-Reply-To: <4B4B92F7.2090903@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com> Message-ID: <4e3a29501001111319j12811ea4v857051e55f98ba61@mail.gmail.com> On Mon, Jan 11, 2010 at 4:07 PM, Damien Broderick wrote: > On 1/11/2010 2:56 PM, John Clark wrote: > > quantum entanglement can't transmit information, it can change things at >> a distance but you need more than that to send a message, you also need >> a standard to measure that change against, and that is where >> entanglement falls short. >> > > Isn't the message problem that you can't *force* a predictable change upon > part of an entangled system? If A's particle spin is up, then B's is down, > okay, you know that--but A can't *make* her particle go spin up when she > wants it to without breaking the entanglement. No? 
> > Right, which is why Psi has to be based on observation of a root signal causing identital changes or predictions in one or more people, maybe leading to drastic conclusions on truly "random" mental occurences governing our thoughts. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 11 21:49:29 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 16:49:29 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> Message-ID: <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> On Jan 11, 2010, Will Steinberg wrote: > The observer effect can still effect (I had to) information, just not information with any pushing power--but this is not needed for communication. I don't understand what that means. > I think many (Especially you, John) would agree that "I" can not cause my hands to move or a sentence to form Actually I don't agree. As I said before I think that saying that I scratched my nose because I wanted to is a perfectly correct way to describe the situation, as is saying the balloon expanded because the pressure inside it increased, its just that those are not the only way to describe what is going on. "I" is a high level description of what a hundred billion neurons are doing, and "pressure" is a high level description of what millions of billions of trillions of atoms are doing. > given causality That is not a given, modern Physics says causality is bunk. And after all, why should all events have a cause? John k Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 11 21:26:12 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 16:26:12 -0500 Subject: [ExI] quantum entanglement In-Reply-To: <4B4B92F7.2090903@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com> Message-ID: <8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> On Jan 11, 2010, Damien Broderick wrote: > Isn't the message problem that you can't *force* a predictable change upon part of an entangled system? Yes. Suppose that you and I had a pair of dice that were entangled in a Quantum Mechanical way, you are in Australia and I am in the USA. We both roll our dice and we write down the numbers, after hundreds of rolls we both examine the numbers and we both conclude that they agree with the laws of probability and nothing unusual has occurred. 
However if I get on a jet and fly to Australia and show you my list of numbers we find that my list is identical with your list. That is weird, but changing one apparently "random" event to another apparently "random" event is no way to send a message. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 11 21:58:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jan 2010 15:58:14 -0600 Subject: [ExI] quantum entanglement In-Reply-To: <8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com> <8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> Message-ID: <4B4B9EF6.7090506@satx.rr.com> On 1/11/2010 3:26 PM, John Clark wrote: > However if I get on a jet and fly to Australia and show you my list of > numbers we find that my list is identical with your list. That is weird That *is* weird, because I'm in San Antonio and have been for years. :) From steinberg.will at gmail.com Mon Jan 11 22:10:19 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 17:10:19 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> Message-ID: <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> I don't even care about whether things have a cause, just that everything that happens causes something. For example (this one is probably false but illustrates the idea:) The entanglement reading system is built. It so happens that a bunch of entangled atoms spin a certain way in many different brains. Two people, Isaac Newton and Gottfried Leibniz, have brains that are wired in similar manners because of their upbringings and work. Patterns produced by random spins, reshuffled at breakneck speeds, happens to, for a nanosecond, chance upon something that is roughly equal to "understand calculus!" in brainguage. The random spins can, because of their quantity, chance upon certain concepts that are picked up by multiple people. A bit spins in my brain and also spins in yours that tells us what lists to write; I don't send mine to you but they have the same root cause. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 11 22:14:47 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 17:14:47 -0500 Subject: [ExI] Meaningless Symbols. 
In-Reply-To: References: <186811.743.qm@web36504.mail.mud.yahoo.com> Message-ID: <32D6B15A-F581-4DE2-A889-B1538B627A8B@bellsouth.net> On Jan 11, 2010 Stathis Papaioannou wrote: > One day it may not be an abstract philosophical problem but a > serious practical problem Truer words have never been spoken. I think my chance of surviving the meat grinder called "The Singularity" is very low, almost zero. But the chance of someone surviving The Singularity who has not overcome the soul superstition is exactly zero, regardless of what euphemism they prefer for the word soul. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Mon Jan 11 22:18:27 2010 From: scerir at libero.it (scerir) Date: Mon, 11 Jan 2010 23:18:27 +0100 (CET) Subject: [ExI] Psi (no need to read this post you already know what it says) Message-ID: <14435781.528241263248307850.JavaMail.defaultUser@defaultHost> Will Steinberg: >A possible method for deducing the entanglement of particles: [...] To my knowledge the possible (but questionable) role of entanglement in physiology (i.e. human eye) has been discussed in very very few papers written by *good* physicists. The interest in entanglement depends on recent experiments with two macroscopic states (localized in two space-like separated sites) which become non-locally correlated having interacted - in the past - with an entangled couple of single-particles (micro-macro entanglement). Quantum experiments with human eyes as detectors based on cloning via stimulated emission http://arxiv.org/abs/0902.2896 -Pavel Sekatski, Nicolas Brunner, Cyril Branciard, Nicolas Gisin, Christoph Simon Abstract: We show theoretically that the multi-photon states obtained by cloning single-photon qubits via stimulated emission can be distinguished with the naked human eye with high efficiency and fidelity. Focusing on the "micro-macro" situation realized in a recent experiment [F. De Martini, F. Sciarrino, and C. Vitelli, Phys. Rev. Lett. 100, 253601 (2008)], where one photon from an original entangled pair is detected directly, whereas the other one is greatly amplified, we show that performing a Bell experiment with human-eye detectors for the amplified photon appears realistic, even when losses are taken into account. The great robustness of these results under photon loss leads to an apparent paradox, which we resolve by noting that the Bell violation proves the existence of entanglement before the amplification process. However, we also prove that there is genuine micro-macro entanglement even for high loss. Towards Quantum Experiments with Human Eyes Detectrors Based on Cloning via Stimulated Emission? http://arxiv.org/abs/0912.3110 -Francesco De Martini Abstract: We believe that a recent, unconventional theoretical work published in Physical Review Letters 103, 113601 (2009) by Sekatsky, Brunner, Branciard, Gisin, Simon, albeit appealing at fist sight, is highly questionable. Furthermore, the criticism raised by these Authors against a real experiment on Micro - Macro entanglement recently published in Physical Review Letters (100, 253601, 2008) is found misleading and to miss its target. 
Quantum superpositions and definite perceptions:envisaging new feasible experimental tests http://arxiv.org/abs/quant-ph/9810028 -GianCarlo Ghirardi Abstract: We call attention on the fact that recent unprecedented technological achievements, in particular in the field of quantum optics, seem to open the way to new experimental tests which might be relevant both for the foundational problems of quantum mechanics as well as for investigating the perceptual processes. From jonkc at bellsouth.net Mon Jan 11 22:23:59 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 11 Jan 2010 17:23:59 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> Message-ID: <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> On Jan 11, 2010, Will Steinberg wrote: > I don't even care about whether things have a cause, just that everything that happens causes something. On that I think you are on much firmer ground. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Mon Jan 11 22:32:01 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 17:32:01 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> Message-ID: <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> > I don't even care about whether things have a cause, just that everything > that happens causes something. > > > On that I think you are on much firmer ground. > > John K Clark > > And from this ground I stand and look out on undiscovered pastures. When I chance to have a "synchronous" moment with another human being, something happens that *feels* like knowing, and discounting intuition is discounting subconscious understanding. I used to run TP tests with some friends using at first six words and then nine; results were incredible until we started rolling dice. But they were still interesting, just not to the point where I could statistically verify, especially given trials. But there is something that happens--when I knew I was going to know, it felt stronger, like a voice shouting "Orange!" in the back of my head. 
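For scale, here is a rough way to put a number on "statistically verify" for word-guessing runs like that (a sketch only: the nine-word pool matches the description above, but the trial and hit counts are made up):

from math import comb

# One-sided binomial tail: probability of at least `hits` correct guesses in
# `trials` attempts when each guess has chance `p` of being right.
def binomial_tail(hits, trials, p):
    return sum(comb(trials, h) * p ** h * (1 - p) ** (trials - h)
               for h in range(hits, trials + 1))

# Illustrative numbers only: 30 guesses against a 9-word pool, 8 hits.
print(binomial_tail(8, 30, 1 / 9))   # roughly 0.014

# A run like that looks suggestive, but unrecorded warm-up trials, stopping
# while ahead, and non-random target selection all shrink the apparent
# p-value, which is why fixing the trial count in advance and using dice to
# pick the targets matters.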
It would be wise to record, in experiments, before the answer is given, whether the person feels like it is right or just guessing; it is my prediction that results would show interesting correlations. If anyone is interesting in conducting well-thought out TP studies (perhaps not perfectly experimental but enough to get a glimpse of the interesting,) I do pretty much nothing all the time and would be happy to further the bounds of knowledge, whether this means proof or disproof. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 11 22:55:50 2010 From: spike66 at att.net (spike) Date: Mon, 11 Jan 2010 14:55:50 -0800 Subject: [ExI] quantum entanglement In-Reply-To: <4B4B9EF6.7090506@satx.rr.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B46A73E.2090200@satx.rr.com> <0AB97B96F36746E79B5AE261B3960142@spike> <4B46B6DF.50504@satx.rr.com> <8BF0812307DF4DB5A2C71BA55A635B80@spike> <4B46C2BB.5000003@satx.rr.com> <413C445C-6CBF-4295-BD4F-809CA32F0E64@bellsouth.net> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4B4B92F7.2090903@satx.rr.com><8561859D-33F3-4253-ADF4-786AB9AE8E9A@bellsouth.net> <4B4B9EF6.7090506@satx.rr.com> Message-ID: <7A7C562B81054DB19547A7874805DB7F@spike> > ...On Behalf Of Damien Broderick ... > > However if I get on a jet and fly to Australia... > > ...I'm in San Antonio and have been for years. :) You can take the mate out of Australia, but... From pharos at gmail.com Mon Jan 11 23:59:34 2010 From: pharos at gmail.com (BillK) Date: Mon, 11 Jan 2010 23:59:34 +0000 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <4B4A1548.6050407@satx.rr.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> Message-ID: On 1/11/10, Will Steinberg wrote: > > When I chance to have a "synchronous" moment with another human being, > something happens that feels like knowing, and discounting intuition is > discounting subconscious understanding. I used to run TP tests with some > friends using at first six words and then nine; results were incredible > until we started rolling dice. You need the dice. People are very bad at trying to choose digits or colours so as to make a random list. That's why when people see a string of heads, heads, heads, heads, heads - they almost automatically say 'the next one *must* be tails'. Two people trying to guess at random will produce far more matches than you would expect by chance alone purely because the guessing mechanism in their brains is very similar and is not random. < But they were still interesting, just not to > the point where I could statistically verify, especially given trials. But > there is something that happens--when I knew I was going to know, it felt > stronger, like a voice shouting "Orange!" in the back of my head. 
That's called self-delusion. Gambling addicts suffer from it a lot. BillK From steinberg.will at gmail.com Tue Jan 12 00:20:40 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 19:20:40 -0500 Subject: [ExI] Psi (no need to read this post you already knowwhatitsays ) In-Reply-To: References: <4e3a29501001061315t717f6263kfb462f8280750e4a@mail.gmail.com> <7635E8FD-6FCD-48AF-B5C8-2990CCC291DF@bellsouth.net> <4e3a29501001111030s4be1c8fbg9acb8d353dc5f28e@mail.gmail.com> <44A01953-FFFC-48C4-93D5-0F3A7E8D947C@bellsouth.net> <4e3a29501001111313s41d12b51i6158b1a9dc8e1c5c@mail.gmail.com> <0D517119-FF1B-4242-83AD-326E9DD827FD@bellsouth.net> <4e3a29501001111410x7ebc1ee6hb2acb97d14055dbe@mail.gmail.com> <3FA43452-4DDB-4C5C-A40D-4E24AC1A54D6@bellsouth.net> <4e3a29501001111432r3d1910b5nb8914d556d9d8857@mail.gmail.com> Message-ID: <4e3a29501001111620o71d17804r8652bf568a2f9ddc@mail.gmail.com> On Mon, Jan 11, 2010 at 6:59 PM, BillK wrote: > Two people trying to guess at random will produce far more matches > than you would expect by chance alone purely because the guessing > mechanism in their brains is very similar and is not random. > > Which is why there is merit in the fact that random entanglement patterns can produce similar mental patterns that seem like communication, but are actually just prediction, and are what we cal psi or synchronicity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Jan 12 01:24:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jan 2010 12:24:42 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <591989.12968.qm@web113609.mail.gq1.yahoo.com> References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> Message-ID: 2010/1/12 Ben Zaiboc : > Presumably the sim would map well onto the original aliens' 'real reality', though, which might baffle us initially, but would be a solvable problem, meaning that the sim would also in principle be solvable (unless you think we can never decipher Linear A). > > In a human-created sim, of course, we decide what represents what. ?Having written the sim, we can understand it, and relate to the mind in there. No, we can never even in principle decipher Linear A unless we have some clue extraneous to the actual texts. We could map any meaning to it we want. It need not even be consistent: there is no reason why the creators of Linear B, hoping to befuddle future archaeologists, could not have made the same symbol mean different things in different parts of the text. A text or code has no life of its own. It's of trivial interest that multiple meanings could be attached to it: the important thing is to work out the originally intended meaning. With computations, however, the situation may be different. If it is an inputless program the meaning ascribed to it by the original programmer has no magical potency to affect it; any other meaning that could possibly be mapped to it, including a meaning that changes from moment to moment, is just as good. This means that any physical activity which could, under some mapping, be seen as implementing a computation is implementing that computation. Like interpreting a text according to an arbitrary encoding this is a trivial observation in general, but it becomes interesting when we consider computations that create their own observers in virtual environments. 
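A toy version of that mapping point (a sketch only; the "physical" states and both decoding tables are invented for the example):

# The "physical system" is nothing but a counter stepping through four states.
physical_states = [0, 1, 2, 3]

# Decoding table A reads those states as the trace of an AND gate fed (1, 1)...
mapping_a = {0: "read a=1", 1: "read b=1", 2: "compute a AND b", 3: "output 1"}

# ...while decoding table B reads the very same states as an OR gate fed (0, 1).
mapping_b = {0: "read a=0", 1: "read b=1", 2: "compute a OR b", 3: "output 1"}

for s in physical_states:
    print(s, "|", mapping_a[s], "|", mapping_b[s])

# Both readings are internally consistent, and nothing in the counter itself
# picks one of them out as "the" computation being performed; with no inputs
# or outputs to pin it down, the meaning lives entirely in the chosen mapping.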
-- Stathis Papaioannou From steinberg.will at gmail.com Tue Jan 12 01:50:52 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 11 Jan 2010 20:50:52 -0500 Subject: [ExI] Psychoactives Message-ID: <4e3a29501001111750t322331ffoabc2dda69413fb98@mail.gmail.com> How does everyone here feel about the use of psychoactives to achieve novel thinking patterns? Even discounting medical applications (DXM for Downs, Cannabis for cancer, mushrooms for migraines, LSD for OCD,) psychoactives play an important role in human thought, especially when it comes to understanding systemic or logical concepts--they are ideal tools for finding solutions, and factored into the double-helix discovery and the invention of the PCR, as wall as numerous computer science dealies. Marijuana tips the scales of the mind towards creativity and allows for new thought processes; tryptamines and phenethylamines and some weird guys like Salvinorin A blow it out of the water completely and introduce the mind to pure possibility. Given that many of you were teenagers right around when this stuff was HUGE, I would think that some of your jaded adolescent minds latched onto Leary and found that, while the assumed prophets of drugs might be spinning their wheels about DMT-world talking basketballs, psychoactives themselves could be a useful tool in ideation, for the simple fact that they DO make new mental patterns and connection and that, given the brain's logical system, new patterns and equations means new possibility of understanding--visualizing 4D objects is only as difficult as the brain producing visual equations compatible with a four-coordinate system (or however it does the stuff it does.) Do you smart folk have your own psychonautic (though the term has been adopted by some of the sillier people) agendas? Or are there still limits in this slice of intelligentsian delight? (I hope not) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Jan 12 02:17:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 11 Jan 2010 18:17:35 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <436311.9101.qm@web36505.mail.mud.yahoo.com> --- On Mon, 1/11/10, Stathis Papaioannou wrote: >> I wonder if everyone understands that if strong AI >> cannot work in digital computers then it follows that >> neither can "uploading" work as that term normally finds >> usage here. > > This is true, and it's the main reason I've persevered with this thread. I think you deserve a medal for your sincere and honest efforts in pursuing these questions with me. Thank you. I wonder if digital computers will ever appreciate these kinds of things that I appreciate about you. -gts From emlynoregan at gmail.com Tue Jan 12 04:18:20 2010 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jan 2010 14:48:20 +1030 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto In-Reply-To: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> References: <201001111742.o0BHg7i9002156@andromeda.ziaspace.com> Message-ID: <710b78fc1001112018x18bb39aoa679539ff87a5db0@mail.gmail.com> 2010/1/12 Max More : > Presumably Emlyn and some others here will strongly disagree with Lanier's > new book -- at least based on the interview included on the Amazon page... 
> > http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647/ref=pe_37960_14063560_as_txt_1/ > > From that interview, his views are worth pondering, but he does seem to be > excessively anti-Web 2.0/collective wisdom. > > Max Wow, what a depressing interview. It's very "Hey You Kids Get Off My Lawn!" I saw this post, and read the link, before my morning coffee, and actually started running around the house shouting, it made me so angry. I've tried to calm down before posting :-) > "Question: You argue the web isn?t living up to its initial promise. How has the internet transformed our lives for the worse? > Jaron Lanier: The problem is not inherent in the Internet or the Web. Deterioration only began around the turn of the century with the rise of so-called > "Web 2.0" designs. These designs valued the information content of the web over individuals. It became fashionable to aggregate the expressions of > people into dehumanized data. There are so many things wrong with this that it takes a whole book to summarize them. Here?s just one problem: It > screws the middle class. Only the aggregator (like Google, for instance) gets rich, while the actual producers of content get poor. This is why newspapers > are dying. " Aggregator is bullshit business speak, a reframing to make people providing a service look bad. What he means is search engines. Why is all the traffic going through them? Because it's a better way to find stuff. The real problem here is that the grouping we call a newspaper is irrelevant; it's a centuries old invention about distributing information on paper (there's a clue in the name, news*paper*). I recently read a marketing person talking about what's happening as the debranding of content; no matter how hard the marketers try, people (who they like calling consumers) just wont go to brands first and find things from there, they insist on going to the faceless search engines and searching everything all at once. It turns out people just don't care about these brands. Newspapers are dying because the heavily optimised business model is broken and unrepairable. The search engines actually send traffic to the newspapers *for free*. They don't have to do that. Here's some more on this: http://publishing2.com/2007/05/29/should-google-subsidize-journalism/ Actually "aggregator" is a good term for newspapers. They take a lot of unrelated things and aggregate them. Their business model relies on this, and to the extent that this is undermined they are screwed. What users are doing is ignoring the aggregation (a whole newspaper) and cherry picking the good stuff; they are being disaggregated. Incidentally, this is what is also hurting the music industry (you can only sell songs now, not albums), and is why the movie business is doing well (you can't disaggregate a movie). As to the middle class, well, if you are working in an obsolete job, get a clue and do something else. If there is a systemic inability for you to be able to do any middle class work at all, maybe the system needs changing? But I talk about this more below. > It might sound like it is only a problem for creative people, like musicians or writers, but eventually it will be a problem for everyone. When robots > can repair roads someday, will people have jobs programming those robots, or will the human programmers be so aggregated that they essentially > work for free, like today?s recording musicians? Web 2.0 is a formula to kill the middle class and undo centuries of social progress." 
(more running and shouting) This isn't even wrong, it's so bad. Yes, there's a problem coming up, it's the SF dream of years ago, rushing up to smash us in the face; the dream of automating away all labour. We've already automated agricultural work away, at least in the developed world, and manufacturing is going that way (largely done or offshored in the developed world, and I read bits and pieces here and there that chinese manufacturing is beginning to automate heavily rather than add people, for instance). But I suspect Lanier doesn't mourn those peasant and working class jobs going by the wayside. The internet is a *fundamental breakthrough*. Web 2.0 is just a bit more unfolding of that. I think we will probably dedicate most of the 21st century to this continuing unfolding, and it'll be upheaval all the way. Upheaval doesn't mean, hey, stuff will be interesting and fun and you'll be able to buy cooler iPhones, it means that stuff we take for granted as fixed will change. Now I think he's conflated two issues in the paragraph above. First, that creative people's jobs are threatened. Second, that all work will disappear. That creative people's jobs are threatened now is true to an extent. There are two major types of work threatened here, one apparently threatened but probably actually strengthened, and one actually threatened. The first one is high profile people/orgs who live by creating some "content" and selling copies ad infinitum. There is a lot of wringing of hands over this, but in fact there's no evidence that anyone is suffering. It is true that people's business models may need to change, but it's not all that hard in fact. The clever people are noticing that if you separate out the scarce from the non-scarce stuff (copies are non-scarce, while scarce are personalised things, timely things such as events, automated server based services, etc), then you can use the non-scarce stuff to get reputation, and reputation can be used to sell the scarce stuff. So famous musicians now make money from touring and use the recording to sell the touring, rather than the other way around, for instance. In the end, this is a small group of people, running businesses which are not exempt from environmental shifts, but who have places they can shift into and be just fine. The second, actually threatened group, is people doing massively duplicated and substitutable creative work. These are usually again from a 20th century business model, and were born of the tyranny of geography, requiring the same work to be replicated over and over in different locales. Examples are newspaper photographers, some types of graphic design, some types of media monitors?, many of the non-journalist creative jobs in newspapers (eg: advice columns, horoscopes, laying out the classified ads). Also, it's not always geography, but commercial pricing and the inability or unwillingness of most commercial entities to work together or make interoperable stuff, that gives us other groups; people who write dictionaries or encyclopedias, many types of packaged software development, and in-house development which is in packaged software space. These people's jobs are being destroyed in the 21st century. Free/cheap stock photo collections continue to hammer the mundane body of photography, graphic design is downloadable so there is less market for the low end, media is endlessly aggregated online. All the pieces of newspapers that aren't journalism are better done online, and don't need to be redone over and over. 
Wikipedia eats the encyclopedias, dictionaries are replaced by online equivalents, packaged software is eaten by open source. (Next on the chopping block: Universities, whose cash cow, the undergrad degree, will be replaced with cheap/free alternative, and scientific journals, which are much better suited by free online resources if the users can just escape the reputation network effect of the existing closed journals) The only real threatened jobs are where people are doing low value crap. Padding. High value stuff will remain. For example, to the extent that journalists are actually useful (and this is highly arguable; journalists are generalists, good at making it look like they know more than they do, in an age where we can see the primary sources and hear direct from the experts), they will be preserved, if they provide a service that can't be substituted. Eg: local reporting might be pulled together under umbrella organisations who monetize that somehow, but more likely it'll be crowdsourced from the actual local people. But you can't pad any longer. You can't make an album of 2 hits and 13 pieces of crap; no one will buy the crap. People will read your great articles, but you can't sell a whole newspaper which is mostly pointless crap and ads (a great analysis by Clay Shirky here: http://www.shirky.com/weblog/2009/10/rescuing-the-reporters/). You can sell blockbuster movies, but the poor quality filler ones will bomb like never before, people know it sucks even before release. You can sell a great novel, but the market for 20 volume tolkeinesque extruded product will eventually fail (books are a bit behind the curve due to the slow take up of eBooks, but that's happening now). You can sell fantastic innovative software like photoshop, or sibelius, but you can't endlessly resell your office applications which have become commoditised, and you can no longer make money making cruddy little CD burning apps which should be freely available utilities. Back to the paragraph above, he then mentions robots taking away all the work. Well, that's been a dream for a long time. And the problem is not that people will miss the work, it is how will we live, ie: get money, without jobs? Good question, a really big question that needs answering. But paid work isn't good in itself; it's by definition stuff you do because you are being paid, and probably wouldn't do otherwise, a necessary evil. It's really hard to support holding on to the concept of paid work if it stops being needed for production. Finally though, it's a huge leap from uninspired duplicated and substitutable paid creative work being in trouble, to all jobs will disappear. Between here and there is such a long, windy, obscured path that the one can't shed light on the other. > Question: You say that we?ve devalued intellectual achievement. How? > Jaron Lanier: On one level, the Internet has become anti-intellectual because Web 2.0 collectivism has killed the individual voice. Rubbish. The Web is full of individual voices. Web 2.0 collectivism (eg: wikipedia) is small compared to the number of articles, blog posts, comments, which have an author. It's just that it's a big world, with a *lot* of individual voices, so getting a mass audience is tougher. Also, many of the individual voices are new ones who are being heard. 
I wish I could find Dr Ben Goldacre's article on this, where he talks about the hopelessness of science journalism, and the way you can now get your information directly from the researchers, because they're blogging about it, for free. A good example is the blog of Fields medalist Terence Tao: http://terrytao.wordpress.com/ > It is increasingly disheartening to write about any topic in depth these days, because people will only read what the first link from a search engine > directs them to, and that will typically be the collective expression of the Wikipedia. Why do they only read that? Because mostly we just need a good, impartial summary and Wikipedia does that wonderfully. But nothing is stopping anyone from writing. It's easier to publish than ever before. So that can't be the objection. The objection above is actually that it is hard to be read. I propose that it is actually no more difficult to be read now than it ever was. It just looks that way to people who are used to having someone publish their work, because they had broken through what was previously the most difficult boundary; the publishing industry gatekeepers. Now that this is no longer as relevant, of course there is more to read, so more competition to be heard amongst all that. But that's no more difficult for writers and potential writers overall, just better for readers. > Or, if the issue is contentious, people will congregate into partisan online bubbles in which their views are reinforced. I don't think a collective > voice can be effective for many topics, such as history--and neither can a partisan mob. Collectives have a power to distort history in a way > that damages minority viewpoints and calcifies the art of interpretation. Only the quirkiness of considered individual expression can cut through > the nonsense of mob--and that is the reason intellectual activity is important. I think we were always partisan, just like we were always stupid. It's just that now, we can see *everyone*. So we can see the other partisan groups, and we can see the stupid stuff, in a way that was largely hidden before. I also think things are actually improving. For example, we've always had conspiracy theories, but now we can see these things, and laugh and prod at them. Think about how scientology has declined as the laser light of the networked people has been beamed into it. > On another level, when someone does try to be expressive in a collective, Web 2.0 context, she must prioritize standing out from the crowd. To do > anything else is to be invisible. Therefore, people become artificially caustic, flattering, or otherwise manipulative. Compete for attention. That's the essence of the modern world. You used to be able to buy your way to success (eg: big content industries could vertically lock up the market, or at least work together cartel-wise to keep out others). Now you increasingly cannot. So, you need to work at being more interesting, in order to garner interest. But then that paragraph doesn't make sense. To be "expressive" you must prioritise "standing out from the crowd"? Non sequitur. To be expressive, you express. That's really unrelated to audience. To get attention, you might need to concentrate on standing out from the crowd, but that's unrelated to being expressive. The real lament here is about the difficulty of getting attention. Yeah, it's tough, suck it up. > Web 2.0 adherents might respond to these objections by claiming that I have confused individual expression with intellectual achievement.
> This is where we find our greatest point of disagreement. I am amazed by the power of the collective to enthrall people to the point of > blindness. Collectivists adore a computer operating system called LINUX, for instance, but it is really only one example of a descendant of > a 1970s technology called UNIX. If it weren?t produced by a collective, there would be nothing remarkable about it at all. Fuck me sideways, this old chestnut. Linux is far too big an enterprise to easily generalize about, but it was never about being an intellectual achievement, or being remarkable. It was a response to the frustration that such banal infrastructure as operating systems required paying rent to commercial interests, and were closed, so people couldn't modify it, fix it, see the internal working and better optimise their work to it, etc etc etc. This was such a painful problem that technical people rebuilt the whole thing from the ground up to be free for everyone forever. That people did this is testament to how crappy the situation was before. Linux massively supports individual freedom. You can go get a copy, and do whatever you want with it, without reference to anyone. Don't like DRM? You don't have to have it. Don't like govt agency backdoors? In principle you can be sure they are not there (and I think so in practice, but I'm assuming enough other eyeballs, and could be wrong). Don't want to be held over a barrel by corporate interests? You don't have to. Want to adapt it to run on your quirky piece of hardware? You can. Want to do your own bizarre shit? You can. Constrast that with any of the closed OSs. Open source does provide plenty that's innovative, but mostly it's not been about that, it's been about freedom, the pure icy cold frosty chocolate libertarian kind. > Meanwhile, the truly remarkable designs that couldn?t have existed 30 years ago, like the iPhone, all come out of "closed" shops where individuals create > something and polish it before it is released to the public. Collectivists confuse ideology with achievement. iPhones are lovely, but one of the worst examples of closed shop thinking around. If open source is the bazaar, then the iPhone is a shiny shopping mall. Stuff like that is anti-human; it comes out of a place where there are no people, just consumers. I do think they've been a nice demonstration of what you can do with a bunch of new technologies (accelerometers, cheap 3G, cheap touch screens, cheap good quality cameras), but think of what they can't do. For example, what justification can their possibly be for not having phones transparently use WiFi for phone calls when in range, and 3G only when there is no other option? The hardware can do it, people want it. It's BS. People often (often!) say to me that Google will be the next Microsoft, and turn into bastards. Maybe. I'm far more worried about Apple, which has always been a closed shop, hostile to 3rd parties and into fleecing the consumer; Microsoft has always been a better choice in terms of freedom. > Question: Why has the idea that "the content wants to be free" (and the unrelenting embrace of the concept) been such a setback? What dangers > do you see this leading to? > Jaron Lanier: The original turn of phrase was "Information wants to be free." And the problem with that is that it anthropomorphizes information. > Information doesn?t deserve to be free. It is an abstract tool; a useful fantasy, a nothing. It is nonexistent until and unless a person experiences it > in a useful way. 
What we have done in the last decade is give information more rights than are given to people. If you express yourself on the > internet, what you say will be copied, mashed up, anonymized, analyzed, and turned into bricks in someone else?s fortress to support an > advertising scheme. However, the information, the abstraction, that represents you is protected within that fortress and is absolutely sacrosanct, > the new holy of holies. You never see it and are not allowed to touch it. This is exactly the wrong set of values. (more running and shouting) Information enriches us. It is pushing strongly in the direction of free not because some ideologues want it to, but because a billion people, empowered by a digital information network joining them all together, want it to be so. Wikipedia exists not because some ideologues want it, but because it's stupendously useful to a billion people, and some of them work to keep it happening. The collection of all of the world's music exists on Youtube not because some ideologues want it (in fact, there don't seem to be any focussed groups who do, and some that definitely don't), but because it's an incredible wealth that a billion people really want. I'm really hammering the keys now, pissed off. This massive collection of searchable, quality information on the net (including the brilliant ways the web 2.0 technologies have found to rise the cream to the top), represents an astounding increase in the absolute wealth of humanity, absolutely flabbergasting. Who has not had their daily lives changed by being able to look up almost any fact at a moment's notice, hear any music, get advice on any topic? Think of the way we used to hoard computer and programming manuals, encyclopedias, dusty books on arcane subjects, paltry collections of LPs and CDs. If we define wealth as access to and power over stuff, is there any across the board increase in absolute wealth that compares to what we've had in the last 10 years, in all of human history? What you say will be copied: this is a platform for perfectly copying digital information. That it will be mashed up: good god, it's a genetic algorithm at work. Information is being worked on by it. People do the mutating and recombining (and sometimes inject new pieces supposedly from whole cloth). The fitness function is how well it competes for attention amongst people (largely, on how good/crap it is). If mashups are crappy, they'll disappear into the morass of crap on the internet. If they are better, they then have better claim to attention than the original work. Your ego is your problem. I've seen a lot of people making similar complaints to Lanier in the last year or so. Bono's recent suggestion that we use draconian Chinese-style monitoring and control online to prop up his CD revenues comes to mind. They all seem to share some suspicious similarities: they are relatively wealthy, and at least part of their income is passive income from IP. I think for wealthy people, this increase in absolute wealth isn't a big deal, because it's absolute; relative to what they already had, it's a rounding error. If Bono wanted access to all the world's music, he could just get a little man to go buy it all for him. Wealthy people can maintain massive private libraries. Many of the areas where we've had improvement are around things that didn't cost much, but randomly accessing all of it (all the scientific journals, or all the books, or all the CDs) was cost prohibitive, unless you were wealthy. 
And I guess of course, if you make a lot of your money from opening royalty checks, well, you don't want that to stop. That income from IP (which requires people respecting your personal brand) is a lot more important to you than this absolute increase for everyone, which you can barely notice. > The idea that information is alive in its own right is a metaphysical claim made by people who hope to become immortal by being uploaded > into a computer someday. It is part of what should be understood as a new religion. That might sound like an extreme claim, but go visit > any computer science lab and you?ll find books about "the Singularity," which is the supposed future event when the blessed uploading is > to take place. A weird cult in the world of technology has done damage to culture at large. Well there you go. Jaron Lanier hates us in particular. "a new religion". Wanker. Show me this damage. Where are we poorer in information terms today than we ever were. Where is our culture(s) weaker, more degenerate? > Question: In You Are Not a Gadget, you argue that idea that the collective is smarter than the individual is wrong. Why is this? > Jaron Lanier: There are some cases where a group of people can do a better job of solving certain kinds of problems than individuals. One > example is setting a price in a marketplace. Another example is an election process to choose a politician. All such examples involve > what can be called optimization, where the concerns of many individuals are reconciled. There are other cases that involve creativity and > imagination. A crowd process generally fails in these cases. The phrase "Design by Committee" is treated as derogatory for good reason. > That is why a collective of programmers can copy UNIX but cannot invent the iPhone. A collective of volunteer programmers will not invent the iPhone because it is an essentially corporate beast. It is shiny and lowest common denominator. It emphasises features that can make money (Make it rich in the App Store!), and hides those that wont (eg: voip over wifi). It is non-hackable in any useful way. It is not only not user serviceable, but barely serviceable even by Apple. You can't strip the apple software out and install your own thing on it. It's a symbol of the corporate desire for the perfect consumer, who knows nothing, slavers over the shiny thing, swallows the mass marketed message. A wallet with legs. Volunteer programmers primarily create things that they themselves want and will use. Linux is derided as hard to use on the desktop; that's because it's made by people who don't want to cater to the lower common denominator. The whole attitude that you shouldn't have to know anything, you can just buy stuff, is a product of 20th century mass market consumerism, it's anti human and wrong. The point of the internet is that you can know anything, and can certainly inform yourself quickly enough to get along in most anything as needed. "Consumer" is a euphemism for "Mark", it's not something to aspire to, it's failure. If we have evolved to be anything noble, it is to be creative, constructive creatures. That's why a corporation can make an operating system but can never invent Linux. > In the book, I go into considerably more detail about the differences between the two types of problem solving. Creativity requires periodic, > temporary "encapsulation" as opposed to the kind of constant global openness suggested by the slogan "information wants to be free." 
> Biological cells have walls, academics employ temporary secrecy before they publish, and real authors with real voices might want to polish > a text before releasing it. In all these cases, encapsulation is what allows for the possibility of testing and feedback that enables a quest > for excellence. To be constantly diffused in a global mush is to embrace mundanity. No one is stopping you from doing this. In fact anyone doing anything thoughtful does this; create in private, polish, then release. The internet is largely individual voices, individual works, and that will not go away any time soon. That there are collective approaches being deployed doesn't attack the individual voice. Remember that every one of those collective approaches comes down to an individual or small group who had an idea, "what if we created an environment where people collaborate in such and such a way"? It's seems to me such a confused idea, that you can't privately create in the web 2.0 world. Of course you can. Any guide to getting an open source project off the ground will tell you first to create something that works, then try showing that to people and see if they're interested in helping. That you can control what you publish - well, that's a whole different thing. Short answer is "no" of course. The real thing that is difficult for creative people is the global competition. It's not creating in isolation that's hard, it's the same as it ever was. It's getting anyone to care once you want to bring your creation into the light. You can do your best work, and find not only can you not sell it, you can't give it away for free, because people have so much choice, and of such quality, that you have to work really hard to get people to even glance at your thing. But that's just old fashioned competition, ratcheted up to the scale of a billion+ potential competitors. If you care about doing quality work, you can still do that. To the extent that it's hard to be paid to do it, exactly when was the mythical glorious past where it was easy? If you care about getting your ego stroked, getting personal attention, well yeah, get in line, and be ready for that to be really hard. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From emlynoregan at gmail.com Tue Jan 12 05:07:41 2010 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jan 2010 15:37:41 +1030 Subject: [ExI] Ecosia - is this BS? Message-ID: <710b78fc1001112107k44441477xcd10662430825e81@mail.gmail.com> My bogometer is in the red. Please read and critique. http://www.businessgreen.com/business-green/news/2254326/bing-backs-world-greenest http://ecosia.org/ Should we translate this as "Microsoft greenwashes Bing, hapless WWF lends support"? -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From max at maxmore.com Tue Jan 12 06:34:16 2010 From: max at maxmore.com (Max More) Date: Tue, 12 Jan 2010 00:34:16 -0600 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto Message-ID: <201001120634.o0C6YSv2005154@andromeda.ziaspace.com> Emlyn: Your long and angry response to the Lanier interview was stimulating. Sorry if my posting that distracted you from other tasks and drew you into a lengthy broadside... Although I disagreed with some of what you said in an earlier post on a related topic, I found that I agreed with almost everything you said in your current response. 
The one obvious point on which I disagree is this: >Why do they only read that? Because mostly we just need a good, >impartial summary and Wikipedia does that wonderfully. I agree that Wikipedia is generally excellent for non-controversial topics. But for controversial topics, both my personal experience and reading about others' experiences says that it does *not* do a wonderful job of being impartial. Cliques of editors can and do exert great control over content, making it very difficult for anyone outside their clique to make changes. They enjoy the appearance of broad input without the reality. (Different editorial policies and incentives might change this, but that's the way it's been for years now.) Apart from that, yes, Lanier seems both anti-transhumanist, pretty much anti-technological progress, and ultimately deeply conservative -- and not in any good sense. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From jonkc at bellsouth.net Tue Jan 12 06:40:38 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 01:40:38 -0500 Subject: [ExI] Psi. (no need to read this post you already know what it says ) In-Reply-To: <4B4910E2.7060300@satx.rr.com> References: <630266.35889.qm@web180203.mail.gq1.yahoo.com> <4B4910E2.7060300@satx.rr.com> Message-ID: On Jan 9, 2010, Damien Broderick wrote: > I also find some of Sheldrake's theories over the top or silly, but (1) that has nothing to do with his experiments OF COURSE THAT HAS SOMETHING TO DO WITH HIS EXPERIMENTS! If the man is known for being a fool then it's not unreasonable to think his experiments may have been foolishly performed, assuming he did any experiment at all and didn't just go straight to the typewriter. > if a scientist with a solid background speaks up for psi, it *means* he's a lunatic/gullible/lying etc, so you don't need to consider anything further that he says. And you think this ridiculous situation has continued unabated for centuries. Damien that is Bullshit, just Bullshit. > So why aren't their papers published in Nature and Science? Suppose stem cell papers were routinely sent for review to Jesuits at the God Hates Abortion Institute at Notre Dame Good God almighty! I would estimate that those two journals published about 60% of the Scientific discoveries made in the 20'th century, and you are comparing them to some religious rag. BULLSHIT! John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Tue Jan 12 06:48:57 2010 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jan 2010 17:18:57 +1030 Subject: [ExI] Jaron Lanier's new book, You Are Not a Gadget: A Manifesto In-Reply-To: <201001120634.o0C6YSv2005154@andromeda.ziaspace.com> References: <201001120634.o0C6YSv2005154@andromeda.ziaspace.com> Message-ID: <710b78fc1001112248n172d4eeen675df63edfa0ed3b@mail.gmail.com> 2010/1/12 Max More : > Emlyn: Your long and angry response to the Lanier interview was stimulating. > Sorry if my posting that distracted you from other tasks and drew you into a > lengthy broadside... I was supposed to be cleaning the house, and instead spent many hours writing that. Thanks :-) Although I'd like to know where my house cleaning robots are; it's 2010 ffs. 
> Although I disagreed with some of what you said in an > earlier post on a related topic, I found that I agreed with almost > everything you said in your current response. > > The one obvious point on which I disagree is this: > >> Why do they only read that? Because mostly we just need a good, impartial >> summary and Wikipedia does that wonderfully. > > I agree that Wikipedia is generally excellent for non-controversial topics. > But for controversial topics, both my personal experience and reading about > others' experiences says that it does *not* do a wonderful job of being > impartial. Cliques of editors can and do exert great control over content, > making it very difficult for anyone outside their clique to make changes. > They enjoy the appearance of broad input without the reality. (Different > editorial policies and incentives might change this, but that's the way it's > been for years now.) Yes, that's true. With wikipedia, I personally get most value from non-controversial topics anyway; I wouldn't go there to understand contempory US politics, but I might go there to understand the meaning and history of a term like, say, utilitarianism. The nice thing though is that it is in no way a monopoly. Wikipedia is largely arrived at through google searches, and so is the rest of the web, so if you really disagree with it, you can post endless rebuttals of its articles, as much as your heart desires. I don't think it gets any special treatment as far as search rankings go. > > Apart from that, yes, Lanier seems both anti-transhumanist, pretty much > anti-technological progress, and ultimately deeply conservative -- and not > in any good sense. > > Max The few things I'd read from Lanier previously, I'd quite liked. I'm disappointed. I *think* he's a bit of a lefty, politically, but I'm not sure. If there's any clue that networked humanity is something new under the sun, politically, socially and economically, it is in the fact that the left and right hate it with equal vehemence. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From jonkc at bellsouth.net Tue Jan 12 07:07:13 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 02:07:13 -0500 Subject: [ExI] Avatar: misanthropy in three dimensions. In-Reply-To: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> Message-ID: <43D0DF0D-1A46-4FF0-BC57-B5ED1A87C0A8@bellsouth.net> On Jan 9, 2010, at 1:22 PM, Max More wrote: > > Comments from anyone who has seen the movie? (I haven't yet.) Very good movie. Of course corporations and technology are evil and tree huggers know no vice, but that is mandatory for any movie set in the future so that didn't bother me; and evil or not the technology displayed is amazing, parts of it are quite beautiful. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Tue Jan 12 07:58:55 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 11 Jan 2010 23:58:55 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> Message-ID: <4B4C2BBF.3070307@rawbw.com> Stathis writes > A text or code has no life of its own. It's of trivial interest that > multiple meanings could be attached to it: the important thing is to > work out the originally intended meaning. My favorite chapter of G?del, Escher, Bach is "The Location of Meaning". 
Hofstadter points out, essentially, that there are two kinds of meaning: conventional and isomorphic (I'm not sure after all these years whether the terminology is mine or his). You speak here of conventional meaning---meaning which operates by convention. Our convention for "z-e-b-r-a" is the large striped African mammal, though obviously those letters could be assigned to something else. Isomorphic meaning, however, is not at all arbitrary. The depth and jiggles in the grooves of a vinyl playing record have, in some cases, an objective isomorphism to the first movement of Beethoven's fifth symphony. That's their undeniable meaning, no two ways about it. > With computations, however, the situation may be different. > If it is an inputless program the meaning ascribed to it by > the original programmer has no magical potency to affect it; > any other meaning that could possibly be mapped to it, > including a meaning that changes from moment to moment, > is just as good. I believe I disagree here. If the computation is isomorphic to some other ultimate entity, then that's its meaning. We need only worry about the fidelity. To use the often heard rainstorm analogy, an exactly detailed computation of that rainstorm may not make anyone wet, but if there is anything to how the rainstorm feels, then we claim that the program feels the same way. > This means that any physical activity which could, under some > mapping, be seen as implementing a computation is implementing that > computation. Like interpreting a text according to an arbitrary > encoding this is a trivial observation in general, but it becomes > interesting when we consider computations that create their own > observers in virtual environments. Well, I'll nit-pick the first sentence here: I think it generally false that "*any* physical activity which could, under some mapping, be seen as implementing a computation is implementing that computation". We must not, after all, put "too much work" into finding such a mapping. For, if we do, then the Theory of Dust becomes acceptable, and it no longer matters what you or I do in anything, because the patterns of all outcomes are already out there between the stars. Instead, only mappings that are evident, i.e. prima facie or manifest, can be accepted. In fact, going back to Ben's example, a decipherment of Linear A will only be said to succeed when there is relatively little "stress" to such a mapping, i.e., the mapping becomes plain and patently manifest. Anyone who, on the other hand, puts forth a "decipherment" of Linear A that seems at all forced will find that no one will have any interest in it. Arbitrary mappings yield nothing and reveal nothing. Lee P.S. I second the motion that a medal should be struck in your honor, to applaud your perseverance with troublesome types like Gordon and me through thick and thin, without showing the slightest exasperation. :) From bbenzai at yahoo.com Tue Jan 12 09:10:24 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 12 Jan 2010 01:10:24 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Stathis Papaioannou wrote: > Of course, there > remains ... the possibility, > however unlikely, that the brain is not computable. I'm at a loss to understand this. On the face of it, it seems to be a claim that brains do not and cannot exist, but that can't be what you mean.
Everything that exists has been 'computed'. Everything is made of fundamental units that have been combined according to a set of rules. When we talk about making simulations we are just talking about moving this process to a different kind of fundamental unit, and discovering then applying the relevant set of rules. Thus we create models of things and processes, re-creating them on a different level of reality. If any aspect of a thing or process is not captured in the model, it means the model is not fine-grained enough, not extensive enough, or uses the wrong rules. All these things are fixable, at least in principle. So what does it mean to say that something is 'not computable', if not that it's impossible? Ben Zaiboc From stefano.vaj at gmail.com Tue Jan 12 10:39:56 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 11:39:56 +0100 Subject: [ExI] Avatar: misanthropy in three dimensions In-Reply-To: References: <201001091822.o09IMqs1023928@andromeda.ziaspace.com> <580930c21001100907q17708cc0u8dcc17b468eea5d8@mail.gmail.com> Message-ID: <580930c21001120239u4435de5cj101b3063c6ea683c@mail.gmail.com> 2010/1/11 ddraig : > 2010/1/11 Stefano Vaj : >> Neither have I, and I am quite impatient to get it on 3d blu-ray. > > You will watch it in 3d at home? How? No big deal. Some kind of 3d blu-ray "standard" is in the works, but several titles are already out, based on the usual tech of polarised glasses (which are included in the disc box). See, e.g., Coraline or Journey To The Center Of The Earth. At least for My Bloody Valentine 3d the disc even permitted you to regulate the depth of the images with your remote control from the menu, as you do when you increase or decrease the audio volume... Sadly enough, I have never had any stereoscopic vision, so I miss what all the excitement is about... ;-) -- Stefano Vaj From stefano.vaj at gmail.com Tue Jan 12 11:45:48 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 12:45:48 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <669329.67544.qm@web113606.mail.gq1.yahoo.com> References: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Message-ID: <580930c21001120345i1a9eaf2dmef8d7f334aaa39e8@mail.gmail.com> 2010/1/12 Ben Zaiboc : > I'm at a loss to understand this. On the face of it, it seems to be a claim that brains do not and cannot exist, but that can't be what you mean. > > Everything that exists has been 'computed'. Everything is made of fundamental units that have been combined according to a set of rules. > > When we talk about making simulations we are just talking about moving this process to a different kind of fundamental unit, and discovering then applying the relevant set of rules. Thus we create models of things and processes, re-creating them on a different level of reality. OTOH, "computability" could be intended in the (Wolframian?) sense that there are no shortcuts. A given problem is uncomputable if it is not "reducible", so you have to run the process and see where it leads. This however tells us nothing about the fact that a functionally equivalent process can be implemented on a different platform. For instance, you can run a cellular automaton with pencil and paper or with a PC.
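To make the example concrete, here is a minimal sketch (in Python, purely illustrative; the rule number, array size and printing are arbitrary choices of mine) of an elementary cellular automaton. The update table is trivial and substrate-neutral, you could crank through exactly the same table with pencil and paper, yet for rules such as 110 there is, as far as anyone knows, no shortcut to simply running it and seeing what it does.

def step(cells, rule=110):
    # Each new cell depends only on (left neighbour, self, right neighbour);
    # the bits of the rule number encode the outcome for the 8 possible patterns.
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40        # start from a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)

Whether the rule is executed in silicon or on paper, the trajectory is the same; that is all I mean here by saying the process can be implemented on a different platform.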
-- Stefano Vaj From stefano.vaj at gmail.com Tue Jan 12 11:59:54 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 12:59:54 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <575831.58228.qm@web36504.mail.mud.yahoo.com> References: <298569.9122.qm@web113602.mail.gq1.yahoo.com> <575831.58228.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21001120359y9bc18c5oac02e87449cbb72d@mail.gmail.com> 2010/1/9 Gordon Swobe : > Human operators ascribe meanings to the symbols their computers manipulate. How would that be the case? Ascribing meaning to a symbol means to associate something to something else. As, in, e.g., ax^2 + bx + c = y when I ascribe to x the meaning of "5". What would be so special in what humans would be doing in this respect? -- Stefano Vaj From stathisp at gmail.com Tue Jan 12 12:07:51 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jan 2010 23:07:51 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <4B4C2BBF.3070307@rawbw.com> References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> <4B4C2BBF.3070307@rawbw.com> Message-ID: 2010/1/12 Lee Corbin : > My favorite chapter of G?del, Escher, Bach is "The Location > of Meaning". Hofstadter points out, essentially, that there > are two kinds of meaning: conventional and isomorphic (I'm > not sure after all these years whether the terminology is > mine or his). > > You speak here of conventional meaning---meaning which > operates by convention. Our convention for "z-e-b-r-a" > is the large striped African mammal, though obviously > those letters could be assigned to something else. > > Isomorphic meaning, however, is not at all arbitrary. > The depth and jiggles in the grooves of a vinyl playing > record have, in some cases, an objective isomorphism > to the first movement of Beethovan's fifth symphony. > That's their undeniable meaning, no two ways about it. Vinyl records are interesting, because the relationship between the bumps in the grooves and the music they represent is not as straightforward as you might think. For reasons of sound quality, during the cutting of a record the high frequencies are boosted and the low frequencies attenuated, and during playback this must be undone by applying the exact inverse operation. This so-called RIAA equalisation is traditionally achieved by using a network of capacitors and resistors in the amplifier preamp stage. The interesting thing about this is that there is no way of figuring out what the equalisation curve is unless you are given that information. I'm not sure if equalisation was used for the Voyager golden record, but it would have made sense to record it with a flat frequency response, otherwise the aliens would only be able to hear a distorted version of what we sound like. But worse would have been sending a CD, compressed audio such as an MP3 file, or encrypted audio, in that order. That would have completely stumped the aliens, no matter how smart they were. This is because digital files have conventional meaning, not isomorphic meaning. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Jan 12 13:41:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 05:41:20 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: <580930c21001120359y9bc18c5oac02e87449cbb72d@mail.gmail.com> Message-ID: <240509.42222.qm@web36505.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stefano Vaj wrote: >> Human operators ascribe meanings to the symbols their >> computers manipulate. > > How would that be the case? 
Ascribing meaning to a symbol > means to associate something to something else. As, in, e.g., ax^2 + > bx + c = y when I ascribe to x the meaning of "5". What would be so > special in what humans would be doing in this respect? I consider my pocket calculator as an extension of my own mind. I use it as a tool. If I encounter a screw that I can't remove with my fingers, I grab my pocket screwdriver. If I encounter a math problem that I can't solve with my brain, I grab my pocket calculator. I don't believe either of these pocket tools of mine have understanding of anything whatsoever, but I might find it fun to pretend they do. -gts From stathisp at gmail.com Tue Jan 12 13:44:07 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 00:44:07 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <669329.67544.qm@web113606.mail.gq1.yahoo.com> References: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Message-ID: 2010/1/12 Ben Zaiboc : > Stathis Papaioannou wrote: > >> Of course, there >> remains ... the possibility, >> however unlikely, that the brain is not computable. > > > I'm at a loss to understand this. ?On the face of it, it seems to be a claim that brains do not and cannot exist, but that can't be what you mean. > > Everything that exists has been 'computed' ?Everything is made of fundamental units that have been combined according to a set of rules. > > When we talk about making simulations we are just talking about moving this process to a different kind of fundamental unit, and discovering then applying the relevant set of rules. ?Thus we create models of things and processes, re-creating them on a different level of reality. > > If any aspect of a thing or process is not captured in the model, it means the model is not fine-grained enough, not extensive enough, or uses the wrong rules. ?All these things are fixable, at least in principle. > > So what does it mean to say that something is 'not computable', if not that it's impossible? Computable means computable by a Turing machine. Not all numbers and functions are computable, but it is not clear how this is relevant to physics. True randomness is not computable (except by a trick involving observers in branching virtual worlds) but there is no evidence that pseudo-randomness, which is computable, won't do as well. Real numbers are not computable, but even if it turns out that some physical parameters are continuous rather than discrete there is no reason to suppose that infinite precision arithmetic will be required to simulate the brain, since thermal motion effects would make precision beyond a certain number of decimal places useless. Finally, there may be new physics, such as a theory of quantum gravity, which is not computable. Roger Penrose thinks that this is the case, and that the brain utilises this exotic physics to do things that no Turing machine ever could, such as have certain mathematical insights. However, few believe that Penrose is right, and almost all agree that his main argument from Godel's theorem is wrong. On balance, it seems that the brain works using plain old fashioned chemistry, which no-one claims is not computable. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Jan 12 13:53:57 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 05:53:57 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <502327.74481.qm@web36508.mail.mud.yahoo.com> > My favorite chapter of G?del, Escher, Bach is "The > Location of Meaning". 
Hofstadter points out, essentially, that... At the end of the day, after all his intellectual musings and gyrations, Hofstadter still needs to demonstrate how he solves the symbol grounding problem without a mind in which symbols can find their grounding: Symbol Grounding http://en.wikipedia.org/wiki/Symbol_grounding -gts From stefano.vaj at gmail.com Tue Jan 12 13:56:19 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 14:56:19 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <240509.42222.qm@web36505.mail.mud.yahoo.com> References: <580930c21001120359y9bc18c5oac02e87449cbb72d@mail.gmail.com> <240509.42222.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21001120556l254fa233p4bf11c57fb083aa7@mail.gmail.com> 2010/1/12 Gordon Swobe : > --- On Tue, 1/12/10, Stefano Vaj wrote: > >>> Human operators ascribe meanings to the symbols their >>> computers manipulate. >> >> How would that be the case? Ascribing meaning to a symbol >> means to associate something to something else. As, in, e.g., ax^2 + >> bx + c = y when I ascribe to x the meaning of "5". What would be so >> special in what humans would be doing in this respect? > > I consider my pocket calculator as an extension of my own mind. I use it as a tool. If I encounter a screw that I can't remove with my fingers, I grab my pocket screwdriver. If I encounter a math problem that I can't solve with my brain, I grab my pocket calculator. But how would they associate symbols with values the way any universal computer is able to do? > I don't believe either of these pocket tools of mine have understanding of anything whatsoever, but I might find it fun to pretend they do. In any event, I assume one does not really have any idea of whether something has any understanding of anything, unless one has first a definition of what "understanding" would mean... And even if you had one, as long as the definition were making reference to some kind of "subjective consciousness" rather than to some phenomenon you would not know anyway. -- Stefano Vaj From stathisp at gmail.com Tue Jan 12 14:00:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 01:00:42 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <502327.74481.qm@web36508.mail.mud.yahoo.com> References: <502327.74481.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : >> My favorite chapter of G?del, Escher, Bach is "The >> Location of Meaning". Hofstadter points out, essentially, that... > > At the end of the day, after all his intellectual musings and gyrations, Hofstadter still needs to demonstrate how he solves the symbol grounding problem without a mind in which symbols can find their grounding: > > Symbol Grounding > http://en.wikipedia.org/wiki/Symbol_grounding At the end of the next day you need to show how the mind solves the symbol grounding problem. I don't expect you to come up with the actual answer, just an indication of what an answer might look like would suffice. -- Stathis Papaioannou From jonkc at bellsouth.net Tue Jan 12 14:48:30 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 09:48:30 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <240509.42222.qm@web36505.mail.mud.yahoo.com> References: <240509.42222.qm@web36505.mail.mud.yahoo.com> Message-ID: <7ED69C97-1790-43EA-AED3-E26F5E6801B1@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: > I don't believe either of these pocket tools of mine have understanding of anything whatsoever How can you tell? 
You think understanding is a completely useless property that doesn't change the behavior of something in the slightest. Why do you even care? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Jan 12 14:51:29 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 06:51:29 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: <580930c21001120556l254fa233p4bf11c57fb083aa7@mail.gmail.com> Message-ID: <12876.6100.qm@web36508.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stefano Vaj wrote: > In any event, I assume one does not really have any idea of > whether something has any understanding of anything, unless one has > first a definition of what "understanding" would mean... It seems then that you want to understand the meaning of "understanding". But that shows me that you already understand it. Someone here tried the other day to re-define understanding in such a way that brains do not really do this thing called "understanding" -- that they do something else instead that we only call understanding. I had trouble following his argument because it seemed to me that he wanted me to understand it, but I couldn't understand it according to the argument. :-) Sometimes you just have to hang your hat on things. I hang my hat on, among other things, the common sense notion that healthy people with developed brains can understand the meanings of words and symbols. It seems pretty obvious that we do it even if I can't tell you exactly how we do it. If I did not think so, and if you also did not think so, then we would not be communicating with symbols right now on ExI. -gts From gts_2000 at yahoo.com Tue Jan 12 15:18:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 07:18:45 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <7ED69C97-1790-43EA-AED3-E26F5E6801B1@bellsouth.net> Message-ID: <42347.9787.qm@web36503.mail.mud.yahoo.com> --- On Tue, 1/12/10, John Clark wrote: > You think understanding is a completely useless property that doesn't > change the behavior of something in the slightest. No, never said that. Humans have it, and if we want a s/h system to behave as if it has it then we must simulate it with syntactic rules in the software. That's what programmers do. -gts From stefano.vaj at gmail.com Tue Jan 12 15:36:40 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 12 Jan 2010 16:36:40 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <12876.6100.qm@web36508.mail.mud.yahoo.com> References: <580930c21001120556l254fa233p4bf11c57fb083aa7@mail.gmail.com> <12876.6100.qm@web36508.mail.mud.yahoo.com> Message-ID: <580930c21001120736m1c67096dl33c91d7a6ec43146@mail.gmail.com> 2010/1/12 Gordon Swobe : > Sometimes you just have to hang your hat on things. I hang my hat on, among other things, the common sense notion that healthy people with developed brains can understand the meanings of words and symbols. It seems pretty obvious that we do it even if I can't tell you exactly how we do it. If I did not think so, and if you also did not think so, then we would not be communicating with symbols right now on ExI. Common sense merely indicates to me that we are inclined to project "subjective" states on other things. By definition such projections are "not even wrong", not saying anything about phenomena (i.e., "objects") other than our own psychology. And they are routinely extended to many non-animal, or even non-organic, objects and systems. 
Moreover, even such projections are quite conditional. For instance, common sense does not tell me that perfectly developed brains of adult fruitflies have a better understanding, in whatever sense of the word you may choose to adopt, "of the meaning of words and symbols" than my PC (and in any event does not tells me that some "intelligent" features are exhibited by any developed brain which could not be replicated by *any* universal computer). -- Stefano Vaj From gts_2000 at yahoo.com Tue Jan 12 15:45:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 07:45:21 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <249656.87266.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stathis Papaioannou wrote: > At the end of the next day you need to show how the mind solves the > symbol grounding problem. We'll know the complete answer someday. For now we need only know and agree that the mind has this cognitive capacity to understand, say, Chinese symbols. Now we ask whether implementing a formal program can give the mind that capacity to ground symbols. If it can then we should be able to do an experiment in which a mind obtains that cognitive capacity from mentally running a program. But as it turns out, we do not obtain that capacity from mentally running such a program. So whatever the mind does to get that cognitive capacity, it doesn't obtain it from running a formal program. Now we know more about the mind than we did before, even if we don't yet know the complete answer. -gts From jonkc at bellsouth.net Tue Jan 12 15:55:18 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 10:55:18 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <42347.9787.qm@web36503.mail.mud.yahoo.com> References: <42347.9787.qm@web36503.mail.mud.yahoo.com> Message-ID: <65BF7102-265A-44CF-BE80-23701CCB07B5@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: >> You think understanding is a completely useless property that doesn't >> change the behavior of something in the slightest. > > No, never said that. Humans have it, and if we want a s/h system to behave as if it has it then we must simulate it You have said that "simulated understanding" is not understanding, so I repeat you think genuine understanding is a completely useless property, unless that is you wish to change your position. I certainly would if I were you. However if you do change you should be aware of the consequences; if genuine understanding is not useless then Evolution can produce it and the Turing Test can detect it, if genuine understanding is useless then the Turing Test can't detect it and neither can Evolution. Do you understand? Of course making a distinction between genuine understanding and simulated understanding is pretty silly, but that's another issue. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Jan 12 15:59:17 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 10:59:17 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <669329.67544.qm@web113606.mail.gq1.yahoo.com> Message-ID: <25BA8FEB-96E8-4A53-8DAF-AAAAC47EAFC4@bellsouth.net> On Jan 12, 2010, Stathis Papaioannou wrote: > Real numbers are not computable Some are, most aren't. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From max at maxmore.com Tue Jan 12 16:10:59 2010 From: max at maxmore.com (Max More) Date: Tue, 12 Jan 2010 10:10:59 -0600 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car Message-ID: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> This is sad. Mr. Rollino looked amazingly good at 103. http://www.msnbc.msn.com/id/34818457/ns/us_news-life/ Max From jonkc at bellsouth.net Tue Jan 12 16:20:15 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 11:20:15 -0500 Subject: [ExI] Meaningless Symbols In-Reply-To: <249656.87266.qm@web36501.mail.mud.yahoo.com> References: <249656.87266.qm@web36501.mail.mud.yahoo.com> Message-ID: <7CBC881D-1E02-454C-BE0D-F8C0E57A27A7@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: > we ask whether implementing a formal program can give the mind that capacity to ground symbols. If it can then we should be able to do an experiment in which a mind obtains that cognitive capacity from mentally running a program. How on Earth is this experiment supposed to work? There is no way you can know what its cognition is, all you can do is observe what it does, and you say that tells you nothing regardless of how brilliant and charming it behaves. > But as it turns out, we do not obtain that capacity from mentally running such a program. How on Earth do you know that? > We'll know the complete answer someday. There is not a snowball's chance in hell. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Jan 12 17:45:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 04:45:49 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <249656.87266.qm@web36501.mail.mud.yahoo.com> References: <249656.87266.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : > --- On Tue, 1/12/10, Stathis Papaioannou wrote: > >> At the end of the next day you need to show how the mind solves the >> symbol grounding problem. > > We'll know the complete answer someday. For now we need only know and agree that the mind has this cognitive capacity to understand, say, Chinese symbols. Now we ask whether implementing a formal program can give the mind that capacity to ground symbols. If it can then we should be able to do an experiment in which a mind obtains that cognitive capacity from mentally running a program. But as it turns out, we do not obtain that capacity from mentally running such a program. So whatever the mind does to get that cognitive capacity, it doesn't obtain it from running a formal program. > > Now we know more about the mind than we did before, even if we don't yet know the complete answer. It's not much of an answer. I was hoping you might say something like, understanding is due to a special chemical reaction in the brain, and since computers usually aren't chemical, they don't have it even if they can simulate its behaviour. In all that you and Searle have said, the strongest statement you can make is that a computer that is programmed to behave like a brain will not *necessarily* have the consciousness of the brain. You have not excluded the *possibility* that it might be conscious. You have no proof that, for example, understanding requires carbon atoms and is impossible without them.
Nor have you any proof that arranging silicon and copper atoms in particular configurations that can be interpreted as implementing a formal program will *prevent* understanding that might have occurred had the arrangement been otherwise. In contrast, I have presented an argument which shows that it is *impossible* to separate understanding from behaviour. We have been talking about computerised neurons but the case can be made more generally. If God makes miraculous neurons that behave just like normal neurons but lack understanding, then these neurons could be used to selectively remove any aspect of consciousness such as perception, emotion and understanding. However, because the miraculous neurons behave normally in their interactions with the other neurons, the subject will behave normally and will not notice that anything has changed. He will lose visual perception but he will not only be able to describe everything he sees, he will also honestly believe that he sees normally. He won't even comment that things are looking a little blurry around the edges, since the part of his brain responsible for noticing, reflecting on and verbalising will behave exactly the same as if the miraculous neurons had not been installed. Now surely if there is *anything* that can be said about visual perception, it is that a conscious, rational person will at least notice that something a bit unusual has happened if he suddenly goes completely blind; or that he has lost the power to understand speech, or the ability to feel pain. But with these miraculous neurons, any aspect of your consciousness could be arbitrarily removed and you would never know it. The conclusion is that in fact you would have normal consciousness with the miraculous neurons. In other words, they're not miraculous at all: not even God can make neurons that behave normally but lack consciousness. It's a logical impossibility, and God can at best only do the physically impossible, not the logically impossible. -- Stathis Papaioannou From spike66 at att.net Tue Jan 12 18:27:19 2010 From: spike66 at att.net (spike) Date: Tue, 12 Jan 2010 10:27:19 -0800 Subject: [ExI] roids in baseball Message-ID: <2B6901F67B3C4D7AB8A273A9A3894B81@spike> One of the US baseball stars yesterday confessed to having been using steroids during the period in which he broke a bunch of long-standing records. Unfortunately the American proletariat totally missed the valuable signal among the noise. We treat it as a big cheating scandal instead of realizing that this is a perfect opportunity to test the value of various medications, a terrifically valuable laboratory in so many ways. We have preserved absurdly detailed records on the performance of so many players, from over a century, most of which are from before artificial steroids existed, so we have a great control group. We could use the information to help others achieve higher athletic performance, or help the aged combat the cruelty of the years. Instead we toss the valuable information into the waste can of shame. The shame is on us. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Jan 12 19:51:22 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 12 Jan 2010 14:51:22 -0500 Subject: [ExI] roids in baseball In-Reply-To: <2B6901F67B3C4D7AB8A273A9A3894B81@spike> References: <2B6901F67B3C4D7AB8A273A9A3894B81@spike> Message-ID: 2010/1/12 spike : > ...
We have > preserved absurdly detailed records on the performance of so many players, > from over a century, most of which are from before artificial steroids > existed, so we have a great control group.? We could use the information to > help others achieve higher athletic performance, or help the aged combat the > cruelty of the years. Given the changes that occurred to equipment and rules over the years, I'm not sure how much you can determine by comparing stats over a hundred year span. Baseball fans can argue over Babe Ruth vs. Mark Maguire to the point that it would make our "artificial consciousness" discussion seem terse. -Dave From jonkc at bellsouth.net Tue Jan 12 20:35:44 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 15:35:44 -0500 Subject: [ExI] Meaningless Symbols In-Reply-To: <12876.6100.qm@web36508.mail.mud.yahoo.com> References: <12876.6100.qm@web36508.mail.mud.yahoo.com> Message-ID: <9A7F80C0-F652-4A85-82B9-E93B67C0919A@bellsouth.net> On Jan 12, 2010, Gordon Swobe wrote: > > It seems then that you want to understand the meaning of "understanding". But that shows me that you already understand it. I understand the definition of understanding, but I don't understand the definition of definition. > > Someone here tried the other day to re-define understanding in such a way that brains do not really do this thing called "understanding" -- that they do something else instead that we only call understanding. Homer didn't write the Iliad and the Odyssey, they were written by another blind poet from Smyrna in 850BC who just happened to have the same name. > Sometimes you just have to hang your hat on things. I hang my hat on, among other things, the common sense notion that healthy people with developed brains can understand the meanings of words and symbols. Because they act as if they do. > It seems pretty obvious that we do it even if I can't tell you exactly how we do it. Because they act as if they do. > If I did not think so[...] Then you'd think you were the only conscious being in the universe, and nobody can live like that. John K Clark > , and if you also did not think so, then we would not be communicating with symbols right now on ExI. > > -gts > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Jan 12 21:21:16 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 16:21:16 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <591989.12968.qm@web113609.mail.gq1.yahoo.com> <4B4C2BBF.3070307@rawbw.com> Message-ID: <8297DA02-72F3-4062-B67B-074463DCDD4F@bellsouth.net> On Jan 12, 2010, Stathis Papaioannou wrote: > worse would have been sending a CD, compressed audio such as an MP3 file, or encrypted > audio, in that order. That would have completely stumped the aliens, no matter how smart they were. I wonder if that is true. Perhaps for MP3 files, because at least to some extent it was designed to accommodate the idiosyncrasies and limitations of the human auditory system, but if the music was converted into a Zip file our alien friends might have a chance. 
If Zip compression were perfect the file would seem completely random and even a Jupiter Brain would be sunk, but Zip is not perfect, redundancy remains, so they might figure out it's a compressed file and guess it represents a vibration. Alien or no I'll bet they're familiar with vibration. As for decoding encrypted stuff, that depends on if Quantum Computers can really be made to work. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Tue Jan 12 23:38:05 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 12 Jan 2010 16:38:05 -0700 Subject: [ExI] I'm no fool In-Reply-To: <201001080843.o088hLc2019286@andromeda.ziaspace.com> References: <201001080843.o088hLc2019286@andromeda.ziaspace.com> Message-ID: <2d6187671001121538v51707e80w1db326eb708dabab@mail.gmail.com> Max More wrote: > With the current discussion about psi, and our continuing interest in > rational thinking... Recently, I heard a line in a South Park episode that I > found extremely funny and really quite deep, paradoxical, and illuminating: > > "I wasn't born again yesterday" > > (This was in South Park, season 7, "Christian Rock Hard" Max, you need to get busy writing those pop culture & philosophy books! I know you can do better than what is already out there. Or you could start by writing "Cryonics for Dummies," to be followed up by "Transhumanism for Dummies..." John : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Wed Jan 13 01:46:47 2010 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 13 Jan 2010 12:16:47 +1030 Subject: [ExI] Raymond Tallis: You won't find consciousness in the brain In-Reply-To: <4B4A2384.7030700@satx.rr.com> References: <4B4A2384.7030700@satx.rr.com> Message-ID: <710b78fc1001121746y360ea3afw20235a8f301f71f7@mail.gmail.com> I've responded to this below. Summary: I don't buy it. Also, just for fun, I've put my description of what I think subjective conscious experience is and does at the bottom of this email, and am hoping for feedback. 2010/1/11 Damien Broderick : > New Scientist: You won't find consciousness in the brain > > > > 7 January 2010 by Ray Tallis > > [Raymond Tallis wrote a wonderful deconstruction of deconstruction and > poststructuralism, NOT SAUSSURE] > > MOST neuroscientists, philosophers of the mind and science > journalists feel the time is near when we will be able to explain > the mystery of human consciousness in terms of the activity of the > brain. There is, however, a vocal minority of neurosceptics who > contest this orthodoxy. Among them are those who focus on claims > neuroscience makes about the preciseness of correlations between > indirectly observed neural activity and different mental functions, > states or experiences. > > This was well captured in a 2009 article in Perspectives on > Psychological Science by Harold Pashler from the University of > California, San Diego, and colleagues, that argued: "...these > correlations are higher than should be expected given the (evidently > limited) reliability of both fMRI and personality measures. The high > correlations are all the more puzzling because method sections > rarely contain much detail about how the correlations were > obtained." 
> > Believers will counter that this is irrelevant: as our means of > capturing and analysing neural activity become more powerful, so we > will be able to make more precise correlations between the quantity, > pattern and location of neural activity and aspects of > consciousness. > > This may well happen, but my argument is not about technical, > probably temporary, limitations. It is about the deep philosophical > confusion embedded in the assumption that if you can correlate > neural activity with consciousness, then you have demonstrated they > are one and the same thing, and that a physical science such as > neurophysiology is able to show what consciousness truly is. I don't think there really is such a confusion. I'm pretty sure that the people studying the structure of the brain, looking for correlates to consciousness, know about this; we are all subjectively conscious beings, after all. It's just that you have to start somewhere; the approach is to keep finding mechanism, keep narrowing things down, and hope that along the way better information and better understanding will yield insight on how to find subjective consciousness itself. Given that currently, regarding subjective first person conscious experience, we can barely even frame the questions we want to ask, digging in hard into areas that we can make sense of is a great approach, particularly given than the one and the other must be massively interrelated. > Many neurosceptics have argued that neural activity is nothing like > experience, and that the least one might expect if A and B are the > same is that they be indistinguishable from each other. Countering > that objection by claiming that, say, activity in the occipital > cortex and the sensation of light are two aspects of the same thing > does not hold up because the existence of "aspects" depends on the > prior existence of consciousness and cannot be used to explain the > relationship between neural activity and consciousness. Ok, this immediately stops making much sense. Tell me if this is what he is saying: the sensation of light, and activity in the occipital cortex are different things, but we might say the activity in the cortex represents the light. But this representation only makes sense in the context of something which can understand the representation, which is consciousness, which puts the cart before the horse? > This disposes of the famous claim by John Searle, Slusser Professor > of Philosophy at the University of California, Berkeley: that neural > activity and conscious experience stand in the same relationship as > molecules of H[2]O to water, with its properties of wetness, > coldness, shininess and so on. Is he talking here about mind being an epiphenomenon of the brain? Or is it something more mundane; water is made of H20 molecules, which in aggregate have these middleworld properties as described. > The analogy fails as the level at > which water can be seen as molecules, on the one hand, and as wet, > shiny, cold stuff on the other, are intended to correspond to > different "levels" at which we are conscious of it. Wait, does this make sense? Wasn't the preceding sentence using water as an analogy, not talking about how we were conscious of it? > But the > existence of levels of experience or of description presupposes > consciousness. Water does not intrinsically have these levels. This is surely playing fast and loose with language. 
At best I can understand this as saying that without conscious experience, the world is just a dance of atoms. There is nothing "wet" because you need a mind to experience "wet". Yet wetness is also operational; it is a loose high level description of how water (groups of H2O molecules) will interact with other substances (it might infuse porous ones, for instance), and there is no need for the conscious observer to be present, in theory, for that to still happen. It's a disingenuous bit of wordplay though, no? At many scales, simple things can group together and exhibit higher level group behaviours that aren't necessarily obvious from the basics of the elements, and aren't like the elements. One H2O molecule really has nothing about it of wetness or shininess or coldness; no one would describe one molecule as wet. In aggregate, however, the grouped substance does. > We cannot therefore conclude that when we see what seem to be neural > correlates of consciousness that we are seeing consciousness itself. Sure, that's why they're called correlates. > While neural activity of a certain kind is a necessary condition for > every manifestation of consciousness, from the lightest sensation to > the most exquisitely constructed sense of self, it is neither a > sufficient condition of it, nor, still less, is it identical with > it. For the activity of individual neurons, I'll accept this. But for the whole system of neurons, it's not at all clear. The wordplay about water above doesn't in any way tell us about it. Groups of things really have properties that the individuals do not, in that they have higher level behaviours which aren't similar to the behaviour of their elements. An H2O molecule is not wet, but water is. Neurons are very unlikely to have subjective consciousness (and their molecules and atoms even less so), but that doesn't tell us whether the system of neurons is. We *don't know* what subjective consciousness is, so we can't say. It is probably safe to say that the neural system is necessary for it, but sufficient and/or equivalent? It could be, it might not be. Occam's razor says to me that it is more likely that the neural system is sufficient for consciousness, because otherwise we are looking for some other mechanism, and there's no evidence of any. But anyway, his argument that a group of things can't have properties different from those of the individual things is wrong. > If it were identical, then we would be left with the insuperable > problem of explaining how intracranial nerve impulses, which are > material events, could "reach out" to extracranial objects in order > to be "of" or "about" them. wtf? Where is there reaching out? There is no necessity for anything to magically breach the skull. We get input, it goes into the neural system, it gets processed, it and processed versions of it get stored as memories and as modifications to our mental processes. There is a representation inside the brain. The mechanism of the brain can work (must work!) entirely in terms of these representations. That we then have subjective experience of some piece of this working, including feelings about the things represented (what are qualia but feelings about representations), is mysterious, but we have no reason to suppose it is anything other than a higher level property of our neural hardware (otherwise, what is it?). The idea of "reaching out" is ridiculous.
If we were directly experiencing the outside world somehow, rather than experiencing reconstituted feelings about representations of things, our mind wouldn't have all the weird failures it has; we wouldn't be able to have experiences of the world different to other people's experiences. > Straightforward physical causation > explains how light from an object brings about events in the > occipital cortex. No such explanation is available as to how those > neural events are "about" the physical object. Biophysical science > explains how the light gets in but not how the gaze looks out. The gaze - is this a reference to Foucault? (reaches for his cudgel) No gaze looks out. If we ignore first person subjective experience for a moment, everything else about the brain makes sense in terms of information processing. A robot can be self-aware, in that it would have in its memory a collection of representations of things in the world, one of which is itself. Its processing would include some places where it was primary, and others where it was just one more thing in the field of things. A sophisticated enough program should be able to do all the things that we do, even come up with the same kinds of thoughts and ideas; if we accept that all the mechanism of the mind is in the brain, then it must be in principle computable, we just don't know how to do all that stuff yet. But it doesn't follow that this robot would be subjectively conscious like we are. If this subjective consciousness is the "gaze", then surely it doesn't look out, but merely looks upon the internal representation of what's out there. > Many features of ordinary consciousness also resist neurological > explanation. Take the unity of consciousness. I can relate things I > experience at a given time (the pressure of the seat on my bottom, > the sound of traffic, my thoughts) to one another as elements of a > single moment. Researchers have attempted to explain this unity, > invoking quantum coherence (the cytoskeletal micro-tubules of Stuart > Hameroff at the University of Arizona, and Roger Penrose at the > University of Oxford), electromagnetic fields (Johnjoe McFadden, > University of Surrey), or rhythmic discharges in the brain (the late > Francis Crick). > > These fail because they assume that an objective unity or uniformity > of nerve impulses would be subjectively available, which, of course, > it won't be. Even less would this explain the unification of > entities that are, at the same time, experienced as distinct. My > sensory field is a many-layered whole that also maintains its > multiplicity. There is nothing in the convergence or coherence of > neural pathways that gives us this "merging without mushing", this > ability to see things as both whole and separate. Does this make any sense to anyone? If you think of the brain as at least in part an information processing organ, then it will have representations of its inputs and itself at many different levels simultaneously (the colours brown and green and also bark and also the tree and also the forest), grouped in useful ways, including temporal grouping. That he can relate the feeling of his arse to his thoughts is in no doubt, but how does this relate to some "unity of consciousness"? Why invoke special magic for something so mundane? > And there is an insuperable problem with a sense of past and future. > Take memory.
It is typically seen as being "stored" as the effects > of experience which leave enduring changes in, for example, the > properties of synapses and consequently in circuitry in the nervous > system. Absolutely. > But when I "remember", I explicitly reach out of the present > to something that is explicitly past. wtf??? How? With magic powers? Is this guy insisting that we have direct experience of the physical world, including the physical world of the past?? All that is required here is that you have an internal representation of the past, tucked away in your brain somewhere. > A synapse, being a physical > structure, does not have anything other than its present state. Yes, just as computer memory only has its present state, there are no time machines. > It does not, as you and I do, reach temporally upstream from the > effects of experience to the experience that brought about the > effects. Fuck a duck! All this requires is representation of the past. If you accept that we have subjective conscious awareness of some part of the processing of our minds, and that we can't explain that, there is no reason to invoke extra unknowns to describe remembering the past. Clearly, we have a representation of the past encoded in our brains, which we use to reconstitute the past. We have encodings of what happened in the past, including representations (not too sophisticated, one might add) of how we felt. It is clear to me that as we recall the past in this way, as we imagine it, we then reconstitute new, current feelings (qualia) relating to it, as if it were happening now. The best evidence that this is the case, and that we don't *actually reach back into the past*, is that we get it wrong a lot, mostly wrong actually, if you are to believe the science. Our memories (our representations of the past) are incomplete, and we fill in the blanks when we load them back up with plausible stuff. Sometimes we fabricate memory entirely. If you were to "explicitly reach out of the present to something that is explicitly past", into the real past, surely all our recollections would be perfect and in perfect agreement? > In other words, the sense of the past cannot exist in a > physical system. Information systems do this with boring consistency. They store records of what happened in the past, who did what, what pieces of paper were seen by whom, etc. Your email client has a record of its past. A diary is a record of the past. These are (parts of) a physical system (there's a toy sketch of the idea below). > This is consistent with the fact that the physics > of time does not allow for tenses: Einstein called the distinction > between past, present and future a "stubbornly persistent illusion". What? Why is this relevant? > There are also problems with notions of the self, with the > initiation of action, and with free will. Some neurophilosophers > deal with these by denying their existence, but an account of > consciousness that cannot find a basis for voluntary activity or the > sense of self should conclude not that these things are unreal but > that neuroscience provides at the very least an incomplete > explanation of consciousness. The basis for voluntary activity is straightforward; a bit of your brain is responsible for taking in a lot of input, including recent sensory information, memory, decisions and hints from other bits of the brain, and deciding on a course of action. In that it decides, based on whatever algorithms it uses, it is voluntary.
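Here is that toy sketch, by the way (Python; every name in it is invented for illustration, and it is obviously not a claim about how brains actually store anything): a store that only ever holds its present state, yet supports "remembering", and, like us, papers over the gaps with guesses when the trace is incomplete.

import time

class ToyMemory:
    """A physical store: it only ever holds its present state."""
    def __init__(self):
        self.traces = []                  # list of (timestamp, stored fragment)

    def store(self, experience):
        # Storage is deliberately lossy: keep only a fragment of the experience.
        self.traces.append((time.time(), experience.split()[:4]))

    def recall(self, index):
        # "Remembering" = rebuilding a plausible past from the stored fragment,
        # not reaching back in time; the missing detail gets filled in.
        when, fragment = self.traces[index]
        return " ".join(fragment) + " ... [rest reconstructed, possibly wrongly]"

memory = ToyMemory()
memory.store("the dog next door barked at the postman all morning")
print(memory.recall(0))   # -> "the dog next door ... [rest reconstructed, possibly wrongly]"

No time machines required, just a representation and a reconstruction step.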
That we have the sense of self, of subjective consciousness, no one will dispute this is mysterious. That we feel like we make decisions freely, rather than as the result of an algorithm is not at all mysterious; we feel all kinds of misleading things. Our brains are weird as hell, and mostly you shouldn't trust your brain too far; I certainly wouldn't turn my back on mine. The big mystery to my mind is that we have subjective consciousness at all. It doesn't seem to do anything useful, that you couldn't do without it. And yet it certainly has a function, has physical presence, because we can talk about it, think about it. It can't be off in some other distinct non-physical realm, because it can affect our brains. I guess a delusion could also do that, but if its a delusion, it's one shared by us all, and hardly counts as such. > I believe there is a fundamental, but not obvious, reason why that > explanation will always remain incomplete - or unrealisable. This > concerns the disjunction between the objects of science and the > contents of consciousness. Science begins when we escape our > subjective, first-person experiences into objective measurement, and > reach towards a vantage point the philosopher Thomas Nagel called > "the view from nowhere". You think the table over there is large, I > may think it is small. We measure it and find that it is 0.66 metres > square. We now characterise the table in a way that is less beholden > to personal experience. > > Thus measurement takes us further from experience and the phenomena > of subjective consciousness to a realm where things are described in > abstract but quantitative terms. To do its work, physical science > has to discard "secondary qualities", such as colour, warmth or > cold, taste - in short, the basic contents of consciousness. For the > physicist then, light is not in itself bright or colourful, it is a > mixture of vibrations in an electromagnetic field of different > frequencies. The material world, far from being the noisy, > colourful, smelly place we live in, is colourless, silent, full of > odourless molecules, atoms, particles, whose nature and behaviour is > best described mathematically. In short, physical science is about > the marginalisation, or even the disappearance, of phenomenal > appearance/qualia, the redness of red wine or the smell of a smelly > dog. Yes > Consciousness, on the other hand, is all about phenomenal > appearances/qualia. As science moves from appearances/qualia and > toward quantities that do not themselves have the kinds of > manifestation that make up our experiences, an account of > consciousness in terms of nerve impulses must be a contradiction in > terms. There is nothing in physical science that can explain why a > physical object such as a brain should ascribe appearances/qualia to > material objects that do not intrinsically have them. > > Material objects require consciousness in order to "appear". Then > their "appearings" will depend on the viewpoint of the conscious > observer. This must not be taken to imply that there are no > constraints on the appearance of objects once they are objects of > consciousness. > > Our failure to explain consciousness in terms of neural activity > inside the brain inside the skull is not due to technical > limitations which can be overcome. 
It is due to the > self-contradictory nature of the task, of which the failure to > explain "aboutness", the unity and multiplicity of our awareness, > the explicit presence of the past, the initiation of actions, the > construction of self are just symptoms. We cannot explain > "appearings" using an objective approach that has set aside > appearings as unreal and which seeks a reality in mass/energy that > neither appears in itself nor has the means to make other items > appear. The brain, seen as a physical object, no more has a world of > things appearing to it than does any other physical object. > The brain is an information processing and control system powerhouse. It also has this associated subjective consciousness, which appears related to / to have access to only a very small part of the brain, given how unaware we are of our own internal workings. The way the author talks about subjective consciousness, he makes it sound like an indivisible whole, atomic. Yet our brains & minds are clearly anything but. The very fact that we are so ignorant of how our mind works shows that the parts which correlate directly with consciousness have direct access to very little of the rest of the brain. I think I've said enough about why I think this guy is wrong. How about I go out on a limb and say what I think about subjective consciousness? I can't say how it works, but I have some ideas on why it exists and what it's for. It seems to me that subjective consciousness is simply a module of the mind, which is for something very specific, and that is to feel things. Qualia like the "redness of red" and emotions like anger share the property of being felt; they are the same kind of thing. It's clear to me at least that this is a functional module, in that it takes information from other parts of the brain as input (for example, the currently imagined representation of the world, whether that is current or a reloaded past), produces feelings (how? No idea), then outputs that back to the other parts of the brain, affecting them in appropriate ways. The other parts of the brain do everything else; they create all our "ideas" (and then we get to feel that "aha" moment"), they make all our decisions (to which are added some feelings of volition), they do all the work. The feelings produced/existing in the subjective consciousness module are like side effects of all that, but they go back in a feedback loop to influence the future operation of the other parts. Why would you have something like this? What can this do that a non-subjectively conscious module couldn't? Why not just represent emotions (with descriptive tags, numerical levels, canned processing specific to each one), why actually *feel* them? To me that's as big a question as how. I can't explain that. What's interesting though is how the purpose of the mechanism of feeling seems to be to guide all the other areas, to steer them. eg: some bits of the brain determine that we are in a fight-or-flight situation. They decide "flight". They inform the feeling module (subjective consciousness) that we need to feel fear. The feeling module does that ("Fear!"), and informs appropriate other parts of the brain to modify their processing in terms appropriate to fear (affecting decision making, tagging our memories with "I was scared here", even affecting our raw input processing). So we feel scared and do scared things. 
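To put that loop in concrete terms, here's a toy sketch in Python of the architecture I'm describing: a feeling module that takes a report from the rest of the (simulated) brain, produces a feeling, and broadcasts it back so the other modules adjust. All the module names, tags and numbers are made up for illustration; the interesting step (how a report becomes something actually *felt*) is exactly the part nobody can write.

class FeelingModule:
    """Takes a report from the rest of the brain and produces a feeling."""
    def feel(self, report):
        # The mysterious bit, reduced here to a lookup table; the whole
        # point is that nobody knows how the real version works.
        return {"flight": "fear", "reward": "pleasure"}.get(report, "neutral")

class RestOfBrain:
    """Everything else: decisions, memory tagging, input processing."""
    def __init__(self):
        self.caution = 0.0
        self.memory_tags = []

    def adjust(self, feeling):
        # Other modules modify their processing in feeling-appropriate ways.
        if feeling == "fear":
            self.caution += 1.0                           # bias future decisions
            self.memory_tags.append("I was scared here")  # tag the memory

brain = RestOfBrain()
feeling = FeelingModule().feel("flight")  # executive bits report "flight"
brain.adjust(feeling)                     # feedback: the feeling steers the rest
print(feeling, brain.caution, brain.memory_tags)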
Probably most importantly, we can break "not enough information" deadlocks in decision making with "well what would the fearful choice be" - that's motivation right there. It's a blunt instrument, which might be useful if you didn't have much else in terms of executive processes. It is really weird in our brains though, because we do, we have fantastic higher level processing that can do all kinds of abstract reasoning and complex planning and sophisticated decision making. Why do we also need the bludgeons of emotions like anger, restlessness, boredom, happiness? So we have roughly two systems doing similar things in very different ways, which you'd expect to fight. And thus the human condition :-) But where it would not be weird is in a creature without all this higher level processing stuff. Never mind how evolution came up with it in the first place (evolution is amazing that way) but given that it did, it would be a great platform for steering, motivating, guiding an unintelligent being. So what I'm getting at is, it's a relic from our deep evolutionary past. It's not higher cognitive functioning at all. Probably most creatures are subjectively conscious. They don't have language, they might not have much concept of the past or future, but they feel, just as we do (if in a less sophisticated way). They really have pleasure and pain and the redness of red. And suffering. We have a conceit that we (our subjectively conscious selves) are *really* our higher order cognitive processes, but I think that's wrong. We take pride in our ideas, especially the ones that come out of nowhere, but that should be a clue. They come out of "nowhere" and are simply revealed to the conscious us. "Nowhere" is the modern information processing bits of the brain, the neocortex, which does the heavily lifting and informs us of the result without the working. We claim our own decisions, but neuroscience, as well as simple old psychology, keeps showing us that decisions are made before we are aware of them, and that we simply rationalize volition where it doesn't exist. How do we make decisions? Rarely in a step-by-step derivational, rational way. More often they are "revealed" to us, they're "gut instinct". They come from some other part of the brain which simply informs "us" of the result. We think of the stream of internal dialogue, the voice in the mind, as truly "us", but where do all those thoughts come from? You can't derive them. It's like we are reading a tape with words on it, which comes from somewhere else; it's being sent in by another part of the brain that we don't have access to, again. We read them, the subjective-consciousness module adds feelings of ownership to them, and decorates them with emotional content, and the result feeds back out to the inaccessible parts of the brain, to influence the next round of thoughts on the tape. In short, I think that the vast majority of the brain is stuff that our "subjective" self can't access except indirectly through inputs and outputs. Most of the things that make us smart humans are actually out in this area, and are plain old information processing stuff, you could replace them with a chip, and as long as the interfaces were the same, you'd never know. I think the treasured conscious self is less like an AGI than like a tiny primitive animal, designed for fighting and fucking and fleeing and all that good stuff, which evolution has rudely uplifted by cobbling together a super brain and stapling it to the poor creature. I hope I'm right. 
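For what it's worth, the "thought tape" picture sketches the same way: an inaccessible generator writes thoughts onto the tape, the conscious reader merely decorates them with feelings and ownership, and the decoration feeds back to bias what gets generated next. Again, every name and number below is invented; it's a cartoon of the idea, not a model of anything.

import random

def generator(bias):
    """Stands in for the inaccessible parts of the brain."""
    pool = ["go outside", "check the fridge", "worry about the deadline"]
    if bias == "anxious":
        pool += ["worry about the deadline"] * 3   # feedback skews the mix
    return random.choice(pool)

bias = "neutral"
for _ in range(5):
    thought = generator(bias)                      # arrives "out of nowhere"
    feeling = "anxious" if "worry" in thought else "neutral"
    print(f"thought: {thought!r}  (felt as {feeling}, claimed as mine)")
    bias = feeling                                 # loops back to the generator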
If this is actually how we work, then the prospect of seriously hacking our brains is very good. You should be able to replace existing higher level modules with synthetic equivalents (or upgrades). You should be able to add new stuff, as long as it obeys the API (eg: add thoughts to the thought tape? take emotional input and modify accordingly?) Also, as to correlates of subjectively conscious experience in the mind, we should be looking for something that exists everywhere, not just in us. That might narrow it down a bit ;-) -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From max at maxmore.com Wed Jan 13 02:00:52 2010 From: max at maxmore.com (Max More) Date: Tue, 12 Jan 2010 20:00:52 -0600 Subject: [ExI] Max and Natasha live on China radio in 2 mins Message-ID: <201001130201.o0D215r5001157@andromeda.ziaspace.com> http://www.am880.net/today.asp From gts_2000 at yahoo.com Wed Jan 13 02:23:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 12 Jan 2010 18:23:50 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <331623.31673.qm@web36501.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stathis Papaioannou wrote: >> Now we know more about the mind than we did before, > even if we don't yet know the complete answer. > > It's not much of an answer. I was hoping you might say > something like,understanding is due to a special chemical reaction in > the brain... Well, yes, clearly neurons and neurochemistry and other biological factors in the brain enable our understanding of symbols. Sorry I can't tell you exactly how the science works; neuroscience still has much work to do. But this conclusion seems inescapable. To deny it one must leave the sane world of philosophical monism and enter into the not-so-sane world of dualism in which mental phenomena exist in some ephemeral netherworld, or into the similarly not-so-sane world of idealism in which matter does not even exist. But of course I'm making some value judgments here; dualists and idealists have rights to express their opinions too. > In all that you and Searle have said, the strongest > statement you can make is that a computer that is programmed to > behave like a brain will not *necessarily* have the consciousness of > the brain. I can say this with extremely high confidence: semantics does not come from syntax, and software/hardware systems as they exist today merely run syntactical programs. For this reason s/h systems of today cannot have semantics, i.e., they cannot overcome the symbol grounding problem. Many philosophers have offered rebuttals to Searle's argument, but none of the reputable rebuttals deny the basic truth that the man in the room cannot understand symbols from manipulating them according to rules of syntax. It just can't happen. (And the truth is that it's even worse than it seems: not only does the semantics come from the human operators, but so too does the syntax. This means that even if computers could get semantics from syntax, we would not be able to say that computers derive semantics independent of their human operators. But that's another story...) > In contrast, I have presented an argument which shows that > it is *impossible* to separate understanding from behaviour. You and I both know that philosophical zombies do not defy any rules of logic. So I don't know what you mean by "impossible". 
In fact to my way of thinking your experiments do exactly that: they create semi-robots that act like they have intentionality but don't, or which have compromised intentionality. They create weak AI. More in the morning if I get a minute. -gts From msd001 at gmail.com Wed Jan 13 04:17:25 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 12 Jan 2010 23:17:25 -0500 Subject: [ExI] Ecosia - is this BS? In-Reply-To: <710b78fc1001112107k44441477xcd10662430825e81@mail.gmail.com> References: <710b78fc1001112107k44441477xcd10662430825e81@mail.gmail.com> Message-ID: <62c14241001122017u68b0b181i8d5180e406575aa2@mail.gmail.com> On Tue, Jan 12, 2010 at 12:07 AM, Emlyn wrote: > My bogometer is in the red. Please read and critique. > > http://www.businessgreen.com/business-green/news/2254326/bing-backs-world-greenest > http://ecosia.org/ > > Should we translate this as "Microsoft greenwashes Bing, hapless WWF > lends support"? "...we could save a rainforest area as big as Switzerland each year." To indicate how far that analogy misses the mark, my only thought upon reading it was, "There are no rainforests in Switzerland" Might as well buy-into the black pixel project for 'saving energy' http://www.treehugger.com/files/2009/06/black-pixel-is-it-possible-to-save-energy-one-pixel-at-a-time.php I think we'd save more energy (and reduce carbon footprint, etc.) if we gave corporations a small tax break for every employee that provably works from home to avoid the commute. They have little incentive to "allow" their employees to escape the cube-farm and remain at home. So instead, I drive 26 miles each direction to sit in front of a computer that I could have accessed remotely (or used VPN, etc.) Stupidly wasteful. :( From stathisp at gmail.com Wed Jan 13 04:35:58 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 15:35:58 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <331623.31673.qm@web36501.mail.mud.yahoo.com> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : >> In all that you and Searle have said, the strongest >> statement you can make is that a computer that is programmed to >> behave ?like a brain will not *necessarily* have the consciousness of >> the brain. > > I can say this with extremely high confidence: semantics does not come from syntax, and software/hardware systems as they exist today merely run syntactical programs. For this reason s/h systems of today cannot have semantics, i.e., they cannot overcome the symbol grounding problem. I don't accept that semantics does not come from syntax because I don't see where else, logically, semantics could come from. However, if I accept it for the sake of argument, you have agreed in the past that running a program incidentally will not destroy semantics. So it is possible for you to consistently to hold that semantics does not come from syntax *and* that computers can have semantics, due to their substance or their processes, just as in the case of the brain. > Many philosophers have offered rebuttals to Searle's argument, but none of the reputable rebuttals deny the basic truth that the man in the room cannot understand symbols from manipulating them according to rules of syntax. It just can't happen. Yes, but the man in the room has an advantage over the neurons in the brain, because he at least understands that he is doing some sort of weird task, while the neurons understand nothing at all. 
You would have to conclude that if the CR does not understand Chinese, then a Chinese speaker's brain understands it even less. >> In contrast, I have presented an argument which shows that >> it is *impossible* to separate understanding from behaviour. > > You and I both know that philosophical zombies do not defy any rules of logic. So I don't know what you mean by "impossible". In fact to my way of thinking your experiments do exactly that: they create semi-robots that act like they have intentionality but don't, or which have compromised intentionality. They create weak AI. I think it is logically impossible to create weak AI neurons. If weak AI neurons were possible, then it would be possible to arbitrarily remove any aspect of your consciousness leaving you not only behaving as if nothing had changed but also unaware that anything had changed. This would seem to go against any coherent notion of consciousness: however mysterious and ineffable it may be, you would at least expect that if your consciousness changed, for example if you suddenly went blind or aphasic, that you would notice something a bit out of the ordinary had happened. If you think that imperceptible radical change in consciousness is not self-contradictory, then I suppose weak AI neurons are logically possible. But you would then have the problem of explaining how you know now that you have not gone blind or aphasic without realising it, and why you should care if you had such an affliction. -- Stathis Papaioannou From jonkc at bellsouth.net Wed Jan 13 04:57:47 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 12 Jan 2010 23:57:47 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <331623.31673.qm@web36501.mail.mud.yahoo.com> Message-ID: <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> > Many philosophers have offered rebuttals to Searle's argument, but none of the reputable rebuttals deny the basic truth that the man in the room cannot understand symbols And no reputable philosopher can deny that the man is not important, the room is. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jan 13 05:08:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jan 2010 16:08:47 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> Message-ID: 2010/1/13 John Clark : > Many philosophers have offered rebuttals to Searle's argument, but none of > the reputable rebuttals deny the basic truth that the man in the room cannot > understand symbols > > And no reputable philosopher can deny that the man is not important, the > room is. Searle's response is for the man to internalise the cards and rules so that the room is eliminated. He then says that the man is the whole system and still doesn't understand Chinese, therefore the system doesn't understand Chinese. But that just means that Searle does not understand the concept of a system. 
-- Stathis Papaioannou From jonkc at bellsouth.net Wed Jan 13 05:19:42 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 13 Jan 2010 00:19:42 -0500 Subject: [ExI] =?iso-8859-1?q?=91Strongest_Man=2C=92_104=2C_dies_after_he?= =?iso-8859-1?q?=27s_hit_by_car?= In-Reply-To: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> Message-ID: <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> On Jan 12, 2010, Max More wrote: > This is sad. Mr. Rollino looked amazingly good at 103. That is sad, still, if any death could be called a good death then that was it, as was the death of my great grandmother who died at the age of 98 when she was hit by a train (she was as deaf as a post by then but her mind was sharp) when she crossed the railroad tracks on her way to visit some friends at a old folks home. It made for a bit of a mess but that wasn't her problem, she didn't have to clean it up. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Jan 13 05:44:01 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 13 Jan 2010 00:44:01 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> Message-ID: <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> On Jan 13, 2010, Stathis Papaioannou wrote: > Searle's response is for the man to internalise the cards and rules so > that the room is eliminated. He then says that the man is the whole > system I know, and after the minnow devours the whale Searle thinks it's still reasonable to talk about "the minnow". I don't. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 13 05:58:09 2010 From: spike66 at att.net (spike) Date: Tue, 12 Jan 2010 21:58:09 -0800 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car In-Reply-To: <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> Message-ID: <3EA74E5F2AD346C689D830340790E256@spike> John I have read some weird comments on ExI-chat. Understatement, I have *written* some weird comments on Exi-chat. But fer cryin out loud man, what, if anything, in the hellll were you thinking when you wrote this? spike _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Sent: Tuesday, January 12, 2010 9:20 PM To: ExI chat list Subject: Re: [ExI] 'Strongest Man,' 104, dies after he's hit by car On Jan 12, 2010, Max More wrote: This is sad. Mr. Rollino looked amazingly good at 103. That is sad, still, if any death could be called a good death then that was it, as was the death of my great grandmother who died at the age of 98 when she was hit by a train (she was as deaf as a post by then but her mind was sharp) when she crossed the railroad tracks on her way to visit some friends at a old folks home. It made for a bit of a mess but that wasn't her problem, she didn't have to clean it up. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Wed Jan 13 06:53:49 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 13 Jan 2010 01:53:49 -0500 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car In-Reply-To: <3EA74E5F2AD346C689D830340790E256@spike> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com> <852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net> <3EA74E5F2AD346C689D830340790E256@spike> Message-ID: <4EA6E89A-249D-4BEF-8B00-E6B50F2865DF@bellsouth.net> On Jan 13, 2010, spike wrote: > what if anything, in the hellll were you thinking when you wrote this? What's the problem? Like it or not billions of human beings have experienced death, trillions if you don't get too picky on defining what a human being is, and some of those deaths were better than others. Of course if I were God I would make death physically impossible, and excruciating pain even more impossible. I applied for the job and I just don't understand why I didn't get it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 13 07:01:24 2010 From: spike66 at att.net (spike) Date: Tue, 12 Jan 2010 23:01:24 -0800 Subject: [ExI] 'Strongest Man,' 104, dies after he's hit by car In-Reply-To: <4EA6E89A-249D-4BEF-8B00-E6B50F2865DF@bellsouth.net> References: <201001121611.o0CGB94L018181@andromeda.ziaspace.com><852EA80A-36EF-469A-B056-164B125EB162@bellsouth.net><3EA74E5F2AD346C689D830340790E256@spike> <4EA6E89A-249D-4BEF-8B00-E6B50F2865DF@bellsouth.net> Message-ID: ...On Behalf Of John Clark On Jan 13, 2010, spike wrote: what if anything, in the hellll were you thinking when you wrote this? What's the problem? ...I applied for the job and I just don't understand why I didn't get it. John K Clark Ja I just failed to see the humor. And I am one of those cats who seldom fails to see the humor. But I will get over it. Sorry to hear of your family's loss. spike From gts_2000 at yahoo.com Wed Jan 13 12:44:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 04:44:50 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <199136.9587.qm@web36503.mail.mud.yahoo.com> --- On Tue, 1/12/10, Stathis Papaioannou wrote: > I don't accept that semantics does not come from syntax > because I don't see where else, logically, semantics could come from. > However, if I accept it for the sake of argument, you have agreed in > the past that running a program incidentally will not destroy > semantics. So it is possible for you to consistently to hold that > semantics does not come from syntax *and* that computers can have > semantics, due to their substance or their processes, just as in the > case of the brain. No, not if by "computer" you mean "software/hardware system". Although we might call the brain a type of computer, we cannot call it a computer of the s/h system type because the brain has semantics and s/h systems do not. Your p-neurons equal s/h systems, and in your thought experiments you network these s/h systems and then imagine that networked s/h systems have semantics. > Yes, but the man in the room has an advantage over the > neurons in the brain, because he at least understands that he is > doing some sort of weird task, while the neurons understand nothing at > all. You would have to conclude that if the CR does not understand > Chinese, then a Chinese speaker's brain understands it even less. I would only draw that conclusion if I did not accept that real chinese brains are not s/h systems. 
In other words, I think you miss the lesson of the experiment, which is that real brains/minds do something we don't yet fully understand. They ground symbols, something s/h systems cannot do. This leads to the next phase in the argument: that real brains have evolved a biological, non-digital means for grounding symbols. > I think it is logically impossible to create weak AI > neurons. If weak AI neurons were possible, then it would be > possible to arbitrarily remove any aspect of your consciousness > leaving you not only behaving as if nothing had changed but also > unaware that anything had changed. This would seem to go against any > coherent notion of consciousness: however mysterious and ineffable it > may be, you would at least expect that if your consciousness changed, > for example if you suddenly went blind or aphasic, that you would notice > something a bit out of the ordinary had happened. If you think that > imperceptible radical change in consciousness is not self-contradictory, > then I suppose weak AI neurons are logically possible. But you would > then have the problem of explaining how you know now that you have not > gone blind or aphasic without realising it, and why you should care if > you had such an affliction. If you replace the neurons associated with "realizing it" then the patient will not realize it. If you leave those neurons alone but replace the neurons in other important parts of the brain, the patient will become a basket case in need of more surgery, as we have discussed already. It seems to me that in your laboratory you create many kinds of strange Frankenstein monsters that think and do many absurd and self-contradictory things, depending on which neurons you replace, and that you then try to draw meaningful conclusions based on the disturbed thoughts and behaviors of the monsters that you have yourself created. In the final analysis, will a person whose brain consists entirely of p-neurons have strong AI? I think the answer is no, for the same reason that I think a network of ordinary computers does not. -gts From gts_2000 at yahoo.com Wed Jan 13 13:54:09 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 05:54:09 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <510795.43385.qm@web36503.mail.mud.yahoo.com> --- On Wed, 1/13/10, Stathis Papaioannou wrote: > Searle's response is for the man to internalise the cards > and rules so that the room is eliminated. He then says that the man is > the whole system and still doesn't understand Chinese, therefore the > system doesn't understand Chinese. Right. > But that just means that Searle > does not understand the concept of a system. The point is that the man now IS the system. He becomes the room that some detractors insisted understood the symbols even if the man inside did not. He now has everything the room had, yet neither he nor anything inside him understands. The entire CR thought experiment was just a parable to help people see the obvious: that syntax is not sufficient for semantics -- that mere knowledge of how to manipulate symbols is not sufficient for gleaning their meanings. But some people missed the point and attacked the parable. Any 7th grade English teacher will teach the same thing that Searle taught: that understanding of syntax does not in itself lead to understanding of word meanings. One cannot become conversant in any language without understanding both its syntax (grammar) and its semantics (vocabulary), and the two things are different. 
Software/hardware systems *seem* to get semantics only because the programmer got semantics in elementary school, and then learned in college how to simulate semantics with syntax in formal programs, and then only if the computer operator either doesn't understand this or pretends it isn't so. -gts From stathisp at gmail.com Wed Jan 13 13:55:17 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 00:55:17 +1100 Subject: [ExI] Meaningless Symbols In-Reply-To: <199136.9587.qm@web36503.mail.mud.yahoo.com> References: <199136.9587.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/13 Gordon Swobe : > --- On Tue, 1/12/10, Stathis Papaioannou wrote: > >> I don't accept that semantics does not come from syntax >> because I don't see where else, logically, semantics could come from. >> However, if I accept it for the sake of argument, you have agreed in >> the past that running a program incidentally will not destroy >> semantics. So it is possible for you to consistently to hold that >> semantics does not come from syntax *and* that computers can have >> semantics, due to their substance or their processes, just as in the >> case of the brain. > > No, not if by "computer" you mean "software/hardware system". > > Although we might call the brain a type of computer, we cannot call it a computer of the s/h system type because the brain has semantics and s/h systems do not. > > Your p-neurons equal s/h systems, and in your thought experiments you network these s/h systems and then imagine that networked s/h systems have semantics. Running formal programs does not (you claim) produce semantics, but neither does it prevent semantics. Therefore, computers can have semantics by virtue of some quality other than running formal programs. >> Yes, but the man in the room has an advantage over the >> neurons in the brain, because he at least understands that he is >> doing some sort of weird task, while the neurons understand nothing at >> all. You would have to conclude that if the CR does not understand >> Chinese, then a Chinese speaker's brain understands it even less. > > I would only draw that conclusion if I did not accept that real chinese brains are not s/h systems. In other words, I think you miss the lesson of the experiment, which is that real brains/minds do something we don't yet fully understand. They ground symbols, something s/h systems cannot do. That misses the point of the CRA, which is to show that the man has no understanding of Chinese, therefore the system has no understanding of Chinese. The argument ought not assume from the start that the CR has no understanding of Chinese on account of it being a S/H system, since that is the point at issue. So with the brain: the neurons don't understand Chinese, therefore the brain doesn't understand Chinese. But the brain does understand Chinese; so the claim that if the components of a system don't have understanding then neither does the system is not valid. > This leads to the next phase in the argument: that real brains have evolved a biological, non-digital means for grounding symbols. > >> I think it is logically impossible to create weak AI >> neurons. If weak AI neurons were possible, then it would be >> possible to arbitrarily remove any aspect of your consciousness >> leaving you not only behaving as if nothing had changed but also >> unaware that anything had changed. 
This would seem to go against any >> coherent notion of consciousness: however mysterious and ineffable it >> may be, you would at least expect that if your consciousness changed, >> for example if you suddenly went blind or aphasic, that you would notice >> something a bit out of the ordinary had happened. If you think that >> imperceptible radical change in consciousness is not self-contradictory, >> then I suppose weak AI neurons are logically possible. But you would >> then have the problem of explaining how you know now that you have not >> gone blind or aphasic without realising it, and why you should care if >> you had such an affliction. > > If you replace the neurons associated with "realizing it" then the patient will not realize it. If you leave those neurons alone but replace the neurons in other important parts of the brain, the patient will become a basket case in need of more surgery, as we have discussed already. No, he won't become a basket case. If the patient's visual cortex is replaced and the rest of his brain is intact then (a) he will behave as if he has normal vision because his motor cortex receives the same signals as before, and (b) he will not notice that anything has changed about his vision, since if he did he would tell you and that would constitute a change in behaviour, as would going crazy. These two things are *logically* required if you accept that p-neurons of the type described are possible. There are several ways out of the conundrum: (1) p-neurons are impossible, because they won't behave like b-neurons (i.e. there is something uncomputable about the behaviour of neurons); (2) p-neurons are possible, but zombie p-neurons are impossible; (3) zombie p-neurons are possible and your consciousness will fade away without you noticing if they are installed in your head; (4) zombie p-neurons are possible and you will notice your consciousness fading away if they are installed in your head but you won't be able to do anything about it. That covers all the possibilities. I favour (2). Searle favours (4), though apparently without realising that it entails an implausible form of dualism (your thinking is done by something other than your brain which functions in lockstep with your behaviour until the p-neurons are installed). Your answer is that the patient will go mad, but that simply isn't possible, since by the terms of the experiment his brain is constrained to behave as sanely as it would have without any tampering. I suspect you're making this point because you can see the absurdity the thought experiment is designed to demonstrate but don't feel comfortable committing to any of the above four options to get out of it. -- Stathis Papaioannou From stathisp at gmail.com Wed Jan 13 14:22:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 01:22:08 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <510795.43385.qm@web36503.mail.mud.yahoo.com> References: <510795.43385.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/14 Gordon Swobe : > --- On Wed, 1/13/10, Stathis Papaioannou wrote: > >> Searle's response is for the man to internalise the cards >> and rules so that the room is eliminated. He then says that the man is >> the whole system and still doesn't understand Chinese, therefore the >> system doesn't understand Chinese. > > Right. > >> But that just means that Searle >> does not understand the concept of a system. > > The point is that the man now IS the system. 
He becomes the room that some detractors insisted understood the symbols even if the man inside did not. He now has everything the room had, yet neither he nor anything inside him understands. The man physically constitutes the whole system but that does not mean that understanding at a higher level does not supervene on his low level symbol processing. That is what neurons do: the individual neurons are stupid, and they remain stupid despite the fact that intelligence and consciousness supervenes on their individually stupid behaviour. Perhaps a variant of the CR where there are *two* men cooperating in the symbol processing might drive home the point. Neither of the men understands Chinese; do you now think it is now possible that the system understands Chinese? What if the two men are telepathically linked so that they form one mind: does the system suddenly lose its understanding of Chinese that it had when they were separate? The CRA is meant to demonstrate that syntax cannot produce semantics without assuming it beforehand. The two man CR is even more closely analogous to the brain, so if the argument is that the two man CR does not have understanding, then it is also an argument that the brain of a Chinese speaker lacks understanding. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Jan 13 14:33:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 06:33:36 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <631550.23799.qm@web36505.mail.mud.yahoo.com> --- On Wed, 1/13/10, Stathis Papaioannou wrote: > The man physically constitutes the whole system but that > does not mean that understanding at a higher level does not supervene > on his low level symbol processing. Understanding at what higher level? The man stands in the middle of a field, naked, processing Chinese symbols in his head according to the syntactic rules specified in the program. Show me who or what understands the symbols. -gts From stathisp at gmail.com Wed Jan 13 15:15:43 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 02:15:43 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <631550.23799.qm@web36505.mail.mud.yahoo.com> References: <631550.23799.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/14 Gordon Swobe : > --- On Wed, 1/13/10, Stathis Papaioannou wrote: > >> The man physically constitutes the whole system but that >> does not mean that understanding at a higher level does not supervene >> on his low level symbol processing. > > Understanding at what higher level? > > The man stands in the middle of a field, naked, processing Chinese symbols in his head according to the syntactic rules specified in the program. Show me who or what understands the symbols. Suppose neurons are smart enough to understand their individual job, such as that they have to fire when they see a certain concentration of neurotransmitter, but not smart enough to understand the big picture. These neurons are in a Chinese speaker's head, and the rest of the cells in his body are no smarter than the neurons. Show me who or what understands Chinese. -- Stathis Papaioannou From jonkc at bellsouth.net Wed Jan 13 17:33:10 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 13 Jan 2010 12:33:10 -0500 Subject: [ExI] Meaningless Symbols. 
In-Reply-To: <510795.43385.qm@web36503.mail.mud.yahoo.com>
References: <510795.43385.qm@web36503.mail.mud.yahoo.com>
Message-ID: <941FEB19-1E85-48B8-B05E-1A07D57F843F@bellsouth.net>

On Jan 13, 2010, Gordon Swobe wrote:

> The point is that the man now IS the system. He becomes the room that some detractors insisted understood the symbols even if the man inside did not. He now has everything the room had, yet neither he nor anything inside him understands.

That is your error right there: "nor anything inside him". After this huge, gigantic, ridiculously large transformation you continue to talk about "the man" as if nothing has happened and as if what's inside him is still just one thing. In your thought experiment you don't give us one shred of evidence that there is no understanding inside the man, you simply state it and then demand that we explain that fact. Well it's not a fact, it's just another of your decrees.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aware at awareresearch.com Wed Jan 13 17:14:43 2010
From: aware at awareresearch.com (Aware)
Date: Wed, 13 Jan 2010 09:14:43 -0800
Subject: [ExI] Seeing Through New Lenses
In-Reply-To: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com>
References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com>
Message-ID:

Forwarding an article with relevance to the Extropy list. It's ironic that in a forum dedicated to the topic of "extropy", which might be interpreted as increasingly meaningful increasing change, there's such vocal support for hard rationality, simple truth, and seeing the world the way it "really is."

- Jef

----------------------
Edge Perspectives with John Hagel: Relationships and Dynamics - Seeing Through New Lenses

Do we all look at the world in the same way? Hardly. We can each look at the same scene and focus our attention on something completely different. Individual idiosyncrasies definitely play a role, but broader patterns of perception are at work as well. Are certain patterns of perception more or less helpful in these rapidly changing times? Most definitely - in fact, they may determine who succeeds and who fails.

About five years ago, Richard Nisbett, a professor of psychology, wrote "The Geography of Thought." This fascinating book drew on extensive research pointing to fundamental cultural differences in how we see the world. Specifically, he contrasted an East Asian way of seeing the world with a more traditional Western way of seeing. While it would be difficult to summarize Nisbett's rich analysis, I want to focus on a key distinction that he develops in his analysis of two cultural ways of perceiving our world. He suggests that East Asians focus on relationships as the key dimension of the world around us while Westerners tend to focus more on isolated objects. In other words, East Asians tend to adopt more holistic views of the world while Westerners are more oriented to reductionist views. This basic difference plays out in fascinating ways, including the greater attention by East Asian children to verbs while Western children tend to learn nouns faster.

One very tangible illustration of this is a simple test reported by Nisbett. A developmental psychologist showed three pictures to children - a cow, a chicken and some grass. He asked children from America which two of the pictures belonged together. Most of them grouped the cow and chicken together because they were both objects in the same category of animals. Chinese children on the other hand tended to group the cow and grass together because "cows eat grass" - they focused on the relationship between two objects rather than the objects themselves.

[Which of these do YOU prefer? Which of these do you think is closer to The Truth? Why? - Jef]

I found this intriguing in the context of our continuing work at the Center for the Edge on the Big Shift. As I indicated in a previous posting, the Big Shift is a movement from a world where value creation depends on knowledge stocks to one where value resides in knowledge flows - in other words, objects versus relationships. Our Western way of perceiving has been very consistent with a world of knowledge stocks and short-term transactions. As we move into a world of knowledge flows, though, I suspect the East Asian focus on relationships may be a lot more helpful to orient us (no pun intended).

Of course, this is not an either/or proposition. Nisbett holds out hope that these perspectives might ultimately converge, citing some promising research evidence: "So, I believe the twain shall meet by virtue of each moving in the direction of the other. East and West may contribute to a blended world where social and cognitive aspects of both regions are represented but transformed - like the individual ingredients in a stew that are recognizable but are altered as they alter the whole. It may not be too much to hope that this stew will contain the best of each culture."

But wait, there is more. The distinction between perception of objects and relationships is just one dimension of difference. In fact, the East Asian and Western modes of seeing share one common element: they view the world as largely static. As Nisbett points out, the Greek philosophers gave us the notion that "the world is fundamentally static and unchanging." East Asians tend to focus on oscillations and cycles which acknowledge change but contain it in relatively narrow fields - the world is in flux but it does not head in fundamentally different directions over long periods of time.

So, there is another dimension that differentiates perception - and this is a point that Nisbett sadly does not explore or develop. Some of us tend to view the world in static terms while others focus on the deep dynamics that lead to fundamental transformations over time.

Many executives, especially in large firms, tend to adopt a static view of the world. They want detailed snapshots of their environments to drive their decision-making. When they go to distant countries and markets, they carefully observe the state of play as it is today, but they rarely ask for "videos" - detailed analyses of the trajectories of change that have been playing out over years and are likely to shape future markets. Even in the more contemporary world of social network analysis, this analysis often remains highly static - elegant maps show the rich structures of these social networks as they exist today, but they rarely reveal the dynamics that evolve these networks over time.

Why is this the case? Many factors contribute to this static view of the world. Modern enterprise is built on the notion of scalable efficiency, and scalable efficiency requires predictability. Predictions are much easier in stable or static worlds, so executives are predisposed to see the world in these terms. Change can be highly unpredictable and can rapidly call into question the ability to predict demand for products or services. Whether one sees in terms of objects or relationships, these are much easier to understand and analyze if they remain stable. Contemporary economics is largely built around equilibrium models that are essential if the detailed econometric analytics are to work. Social networks are complex and messy as it is, without having to factor in even more complex dynamics that continually reshape these networks over time. We don't even have a very robust set of categories to describe various trajectories that can play out over time. Let's face it, life would be a lot simpler if everything just came to a halt and stayed the way it is right now.

But, of course, it does not stand still. Our world is constantly evolving in complex and unexpected ways. And there is evidence that it is evolving ever more rapidly, generating disruptions that send people and things careening in new and unanticipated directions. Product life cycles are compressing across many, if not most, industries. The movement from products to services as key drivers of growth reinforces this trend, since services can often be updated far more frequently than products. With the growth of outsourcing, new competitors can enter and scale positions in global markets in ways that simply were not feasible in the past, when capital intensive physical facilities needed to be built before products could be launched. Edges of new innovation rise quickly and gather force to challenge entrenched positions in the core of our global economy. Black swans pop up with increasing frequency, seemingly out of nowhere and challenging some of our most basic assumptions about the world around us.

Yet, we do not have very good lenses or analytic tools to bring these dynamics to the forefront. They tend to operate behind the scenes, rarely seen until it is too late and the latest disruption is enveloping us. Survival in this more rapidly changing world requires developing new modes of perception, ones that put structure in the background and focus attention on the deep dynamics that are re-shaping the structures around us.

This is the other key message of the Big Shift work. We are going through a profound long-term shift in the way our global business landscapes are evolving. We get so caught up in short-term events that we lose sight of these long-term changes, much less understanding what is driving them or thinking about their implications for how we work and live. As we have emphasized, we must learn to make sense of the changes unfolding around us before we can make progress. Even more fundamentally, we must learn to see these changes, searching them out where they remain hidden or obscured and penetrating through the surface currents of change to focus on the deeper dynamics shaping these currents.

What is required to do this? Well, first we need to embrace change rather than dampen or suppress it. Virginia Postrel wrote "The Future and Its Enemies" over a decade ago, a fascinating book that described a persistent and intensifying conflict between stasists, those who fear and resist change, and dynamists, those who welcome change as an opportunity to create even more value for more people. Those who fear and resist change spend relatively little time understanding change - all of their energy is focused on blocking it. By embracing change, we begin to see the opportunities it creates. We are motivated to explore the contours of change in ways that move us from focusing on what is to what could be.

As we begin this migration, we will need new analytic tools to help us on our way. Promising early toolkits can be found in diverse arenas. For example, the Santa Fe Institute is studying the evolution of complex adaptive systems and increasing returns dynamics. On another front, the revival of Austrian economics challenges equilibrium analysis and instead focuses on processes of change unleashed by distributed tacit knowledge, inspired by the early work of Friedrich Hayek. In yet another arena, work in the technology world seeks to understand the implications of continuing exponential improvement in the price/performance of digital technology as it breaches the boundaries of computing and invades such diverse arenas as biology, materials science and robotics.

Stepping back from all of this, the challenge is great, especially for those of us in the West. We must learn to shift attention from objects to relationships while at the same time moving from structure to dynamics as the key lens for perception. We were not trained this way. We generally have not operated in this way. All of our assumptions tell us that this is the wrong way. Yet, there are enormous opportunities for those who do make this shift. Perhaps most importantly, those of us who remain wedded to the old way of seeing things will find ourselves increasingly stressed, blindsided and marginalized in a world that will continue to move on without us.

From thespike at satx.rr.com Wed Jan 13 18:31:36 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Wed, 13 Jan 2010 12:31:36 -0600
Subject: [ExI] Seeing Through New Lenses
In-Reply-To:
References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com>
Message-ID: <4B4E1188.7050603@satx.rr.com>

On 1/13/2010 11:14 AM, Aware wrote:

> A developmental psychologist showed three pictures to children - a cow, a chicken and some grass. He asked children from America which two of the pictures belonged together. Most of them grouped the cow and chicken together because they were both objects in the same category of animals. Chinese children on the other hand tended to group the cow and grass together because "cows eat grass" - they focused on the relationship between two objects rather than the objects themselves.
>
> [Which of these do YOU prefer? Which of these do you think is closer to The Truth? Why? - Jef]

The cow and the chicken, because they are Friends.

From spike66 at att.net Wed Jan 13 18:53:32 2010
From: spike66 at att.net (spike)
Date: Wed, 13 Jan 2010 10:53:32 -0800
Subject: [ExI] Seeing Through New Lenses
In-Reply-To: <4B4E1188.7050603@satx.rr.com>
References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com> <4B4E1188.7050603@satx.rr.com>
Message-ID: <248C93189859442BBF93F46F962216C8@spike>

...
> > A developmental psychologist showed three pictures to children - a cow, a chicken and some grass. He asked children from America which two of the pictures belonged together. Most of them grouped the cow and chicken together because they were both objects in the same category of animals. Chinese children on the other hand tended to group the cow and grass together because "cows eat grass" - they focused on the relationship between two objects rather than the objects themselves.
> >
> > [Which of these do YOU prefer? Which of these do you think is closer to The Truth? Why? - Jef]
>
> The cow and the chicken, because they are Friends. Damien

The chicken and the grass belong together, with the cow being the odd lifeform out. Clearly neither the chicken nor the grass feed their offspring directly from glands on their bodies.

spike

From natasha at natasha.cc Wed Jan 13 19:22:55 2010
From: natasha at natasha.cc (natasha at natasha.cc)
Date: Wed, 13 Jan 2010 14:22:55 -0500
Subject: [ExI] Seeing Through New Lenses
In-Reply-To: <248C93189859442BBF93F46F962216C8@spike>
References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com> <4B4E1188.7050603@satx.rr.com> <248C93189859442BBF93F46F962216C8@spike>
Message-ID: <20100113142255.jvam7een28ws0wok@webmail.natasha.cc>

LOL

Quoting spike :

> ...
> > > A developmental psychologist showed three pictures to children - a cow, a chicken and some grass. He asked children from America which two of the pictures belonged together. Most of them grouped the cow and chicken together because they were both objects in the same category of animals. Chinese children on the other hand tended to group the cow and grass together because "cows eat grass" - they focused on the relationship between two objects rather than the objects themselves.
> > >
> > > [Which of these do YOU prefer? Which of these do you think is closer to The Truth? Why? - Jef]
> >
> > The cow and the chicken, because they are Friends. Damien
>
> The chicken and the grass belong together, with the cow being the odd lifeform out. Clearly neither the chicken nor the grass feed their offspring directly from glands on their bodies.
>
> spike
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From gts_2000 at yahoo.com Wed Jan 13 19:50:13 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Wed, 13 Jan 2010 11:50:13 -0800 (PST)
Subject: [ExI] Meaningless Symbols.
In-Reply-To:
Message-ID: <321375.66546.qm@web36501.mail.mud.yahoo.com>

--- On Wed, 1/13/10, Stathis Papaioannou wrote:

> Suppose neurons are smart enough to understand their individual job, such as that they have to fire when they see a certain concentration of neurotransmitter, but not smart enough to understand the big picture. These neurons are in a Chinese speaker's head, and the rest of the cells in his body are no smarter than the neurons. Show me who or what understands Chinese.

In that case the system understands Chinese. Evidently it learned it somewhere, and as far as I know only human systems can do it.

The question *here* concerns whether people or computers can learn or understand Chinese from following rules of syntax only, because formal programs have only rules of syntax.

Again I ask you:

The Englishman stands naked in a field. He represents the entire system. He and his neurons (trillions upon trillions upon trillions of them if you like) process Chinese symbols according to the *syntactic rules specified in a program* which he and his neurons have memorized. Show me who or what understands the meanings of the symbols.

If you cannot then you agree with your 7th grade English teacher who knew that following the rules of grammar (syntax) is not the same as understanding the meanings of the words (semantics).

That's why your teacher tested your grammar and vocabulary skills on different days of the week. *They're different subjects*. You used to know this.
-gts From gts_2000 at yahoo.com Wed Jan 13 20:05:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 12:05:59 -0800 (PST) Subject: [ExI] Meaningless Symbols. Message-ID: <483027.77459.qm@web36506.mail.mud.yahoo.com> By the way I assume by "chinese speaker" in your example below that you refer to a native Chinese speaker, or to some person/system who understands Chinese as people actually understand it, i.e., by means other than running syntactic programs in their heads. -gts --- On Wed, 1/13/10, Gordon Swobe wrote: > From: Gordon Swobe > Subject: Re: [ExI] Meaningless Symbols. > To: "ExI chat list" > Date: Wednesday, January 13, 2010, 2:50 PM > --- On Wed, 1/13/10, Stathis > Papaioannou > wrote: > > > Suppose neurons are smart enough to understand their > > individual job, such as that they have to fire when > they see a > > certain concentration of neurotransmitter, but not > smart enough to > > understand the big picture. These neurons are in a > Chinese speaker's > > head, and the rest of the cells in his body are no > smarter than the > > neurons. Show me who or what understands Chinese. > > In that case the system understands Chinese. Evidently it > learned it somewhere, and as far I know only human systems > can do it. > > The question *here* concerns whether people or computers > can learn or understand Chinese from following rules of > syntax only, because formal programs have only rules of > syntax. > > Again I ask you: > > The Englishman stands naked in a field. He represents the > entire system. He and his neurons (trillions upon trillions > upon trillions of them if you like) process Chinese symbols > according to the *syntactic rules specified in a program* > which he and his neurons have memorized. Show me who or what > understands the meanings of the symbols. > > If you cannot then you agree with your 7th grade English > teacher who knew that following the rules of grammar > (syntax) is not the same as understanding the meanings of > the words (semantics). > > That's why your teacher tested your grammar and vocabulary > skills on different days of the week. *They're different > subjects*. You used to know this. > > -gts > > > > ? ? ? > From eric at m056832107.syzygy.com Wed Jan 13 20:35:37 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 13 Jan 2010 20:35:37 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <321375.66546.qm@web36501.mail.mud.yahoo.com> References: <321375.66546.qm@web36501.mail.mud.yahoo.com> Message-ID: <20100113203537.5.qmail@syzygy.com> Gordon asks: > >The Englishman stands naked in a field. He represents the entire > system. He and his neurons (trillions upon trillions upon trillions > of them if you like) process Chinese symbols according to the > *syntactic rules specified in a program* which he and his neurons > have memorized. Show me who or what understands the meanings of the > symbols. That's easy: the data state of the system is where the understanding is. Syntactic processing involves keeping machine state. In this case, that state might represent interconnections between neurons, and the strengths of those interconnections. Those connections and strengths change over time, and are data which can be syntactically manipulated to model those changes. The changes represent learning. The semantics is learned based on experience. The semantics is encoded in that data. 
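As a toy sketch of what I mean by syntactic manipulation of that data (a made-up example in Python, not a model of real neurons; the unit names and learning rate are arbitrary): store the connection strengths as plain data and strengthen a connection whenever its two endpoints fire together. The update rule is pure symbol shuffling, yet after enough experience the data encodes a history of what was seen.

# Connection strengths stored as plain data: (pre, post) -> weight.
# The unit names "A", "B", "C" and the rate are placeholders.
weights = {("A", "B"): 0.1, ("B", "C"): 0.1}

def update(firing, rate=0.05):
    # Toy Hebbian-style rule: strengthen connections whose endpoints fired together.
    for (pre, post), w in list(weights.items()):
        if pre in firing and post in firing:
            weights[(pre, post)] = w + rate

for _ in range(10):   # "experience": units A and B repeatedly active together
    update({"A", "B"})

print(weights)        # the A-B weight has grown; the data now reflects that history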
If you take that same data representing a system which understands something and use it to drive another computational process based on a different substrate, the resulting system will still understand the same things. If neural interconnections in a human brain have achieved some understanding, we can (theoretically) extract that understanding and move it to another substrate, like computationally based neurons. Oh, and symbol grounding is learned based on interactions with external entities. Why is this such a mystery? -eric From stathisp at gmail.com Thu Jan 14 00:20:17 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 11:20:17 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <321375.66546.qm@web36501.mail.mud.yahoo.com> References: <321375.66546.qm@web36501.mail.mud.yahoo.com> Message-ID: 2010/1/14 Gordon Swobe : > --- On Wed, 1/13/10, Stathis Papaioannou wrote: > >> Suppose neurons are smart enough to understand their >> individual job, such as that they have to fire when they see a >> certain concentration of neurotransmitter, but not smart enough to >> understand the big picture. These neurons are in a Chinese speaker's >> head, and the rest of the cells in his body are no smarter than the >> neurons. Show me who or what understands Chinese. > > In that case the system understands Chinese. Evidently it learned it somewhere, and as far I know only human systems can do it. Hence the point: the system understands even though the parts of it don't. We already knew that was the case, so the CR does not add anything to the discussion. > The question *here* concerns whether people or computers can learn or understand Chinese from following rules of syntax only, because formal programs have only rules of syntax. Which the CRA does not help with. The man manipulates symbols without understanding them and so do the neurons. > Again I ask you: > > The Englishman stands naked in a field. He represents the entire system. He and his neurons (trillions upon trillions upon trillions of them if you like) process Chinese symbols according to the *syntactic rules specified in a program* which he and his neurons have memorized. Show me who or what understands the meanings of the symbols. > > If you cannot then you agree with your 7th grade English teacher who knew that following the rules of grammar (syntax) is not the same as understanding the meanings of the words (semantics). > > That's why your teacher tested your grammar and vocabulary skills on different days of the week. *They're different subjects*. You used to know this. In the first grade, the teacher made mouth noises and pointed to objects or pictures of objects. In later years it was more often relating one set of mouth noises to another set of mouth noises which had already been learned. -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 14 01:23:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 13 Jan 2010 17:23:45 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <399013.3045.qm@web36505.mail.mud.yahoo.com> --- On Wed, 1/13/10, Stathis Papaioannou wrote: > Hence the point: the system understands even though the > parts of it don't. We already knew that was the case, so the CR > does not add anything to the discussion. Forget about the CR. Neither of us care if parts of the system understand anything. We want to know if the system as a whole knows Chinese from manipulating Chinese symbols according to rules of syntax. 
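A toy sketch of that two-stage process (my own invented example in Python; the words and the sense-data tags are placeholders): a few symbols are tied directly to sense data, and later symbols are defined only by association with symbols already learned.

# Stage 1: a few words grounded directly in (stand-in) sense data.
grounded = {"red": "RETINAL-PATTERN-17", "ball": "RETINAL-PATTERN-42"}

# Stage 2: later words defined only in terms of words already known.
defined = {"crimson": ["red"], "toy": ["ball"], "crimson ball": ["crimson", "ball"]}

def ground(word):
    # Trace a word back to sense data, through other words if necessary.
    if word in grounded:
        return [grounded[word]]
    return [p for w in defined.get(word, []) for p in ground(w)]

print(ground("crimson ball"))   # -> ['RETINAL-PATTERN-17', 'RETINAL-PATTERN-42']

Every step is an association between symbols, yet the chain bottoms out in sense data.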
It cannot, because syntax only tells the system 'what' to put 'where' and 'when'. The system looks at the forms of things, not at the meanings of things. Here's the classic one-line program: print "Hello World" It takes the form The system does not understand or care about the semantic drivel you put in the string. It just follows the syntactic rule (and it doesn't care about that either, by the way) and prints the contents of the string. Do you think the system understands the string? Do you think that upon running this program, a little conscious entity inside your computer will greet you? Seriously, Stathis, what do you think? And by the way the most sophisticated program possible on a s/h system will differ in no philosophically important way from this one. -gts From eric at m056832107.syzygy.com Thu Jan 14 01:50:56 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 14 Jan 2010 01:50:56 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <399013.3045.qm@web36505.mail.mud.yahoo.com> References: <399013.3045.qm@web36505.mail.mud.yahoo.com> Message-ID: <20100114015056.5.qmail@syzygy.com> Gordon writes: > >Here's the classic one-line program: > >print "Hello World" > >It takes the form > > > >Do you think the system understands the string? Of course not. If there is any understanding here, it is of the word "print". The interpreter maps the word "print" into an external action, and as a result ends up making the string visible. The CPU/RAM, etc.. in the machine (the hardware) has no understanding of "print". That understanding (such as it is) is encoded in the software running on the computer. Your understanding of the word "print" is not inherent in your brain, just as it isn't in the CPU. When you were born, you had the capacity to learn English, including the word "print", but you could have been taught Chinese instead. That teaching changed the neural interconnections in your brain, changing the way it reacts to English words, just as the interpreter program changes the way the computer hardware reacts to programming constructs like "print". Your understanding is encoded in those interconnections. Understanding cannot be encoded in a single neuron, just as it cannot be encoded in a single transistor. It is the system of interconnections which learns to understand. That system of interconnections can be treated as data, and can be manipulated by programs using purely syntactic rules. -eric From stathisp at gmail.com Thu Jan 14 01:56:38 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 12:56:38 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <399013.3045.qm@web36505.mail.mud.yahoo.com> References: <399013.3045.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/14 Gordon Swobe : > --- On Wed, 1/13/10, Stathis Papaioannou wrote: > >> Hence the point: the system understands even though the >> parts of it don't. We already knew that was the case, so the CR >> does not add anything to the discussion. > > Forget about the CR. Neither of us care if parts of the system understand anything. We want to know if the system as a whole knows Chinese from manipulating Chinese symbols according to rules of syntax. > > It cannot, because syntax only tells the system 'what' to put 'where' and 'when'. The system looks at the forms of things, not at the meanings of things. > > Here's the classic one-line program: > > print "Hello World" > > It takes the form > > > > The system does not understand or care about the semantic drivel you put in the string. 
It just follows the syntactic rule (and it doesn't care about that either, by the way) and prints the contents of the string. > > Do you think the system understands the string? Do you think that upon running this program, a little conscious entity inside your computer will greet you? Seriously, Stathis, what do you think? No, because this program is less complex than even a single neuron. > And by the way the most sophisticated program possible on a s/h system will differ in no philosophically important way from this one. The problem is that you can't explain how humans get their understanding. It doesn't help to say that some physical activity happens in neurons which produces the understanding, not because you haven't given the details of the physical activity, but because you haven't explained how, in general terms, it is possible for the physical activity in a brain to pull off that trick but not the physical activity in a computer. Even if it's true that computers only do syntax and syntax can't produce meaning (it isn't, since logically there is nowhere else for meaning to come from) this does not mean that computers can't produce meaning. It would be like saying brains only do chemistry and chemistry can't produce meaning. In the course of the chemistry brains manipulate symbols and that's where the meaning comes from if you believe meaning can only come from symbol manipulation; and in the course of manipulating symbols computers are physically active and that's where the meaning comes from if you believe meaning can only come from physical activity. -- Stathis Papaioannou From max at maxmore.com Thu Jan 14 02:21:46 2010 From: max at maxmore.com (Max More) Date: Wed, 13 Jan 2010 20:21:46 -0600 Subject: [ExI] The Nature of Technology: What It Is and How It Evolves (Brian Arthur) Message-ID: <201001140222.o0E2M4Hv027890@andromeda.ziaspace.com> Many of the economically-inclined people here will be familiar with the previous work of economist W. Brian Arthur. I just read a disappointing interview with him? The Evolution of Technology by Art Kleiner strategy+business, January 4, 2010 http://www.strategy-business.com/article/00014?pg=all My review/commentary (written for executives) is here: http://www.manyworlds.com/exploreco.aspx?coid=CO1121015233189 http://www.manyworlds.com/exploreCO.aspx?coid=CO1121015233189 His new book is The Nature of Technology: What It Is and How It Evolves. Has anyone read it? Is it drastically better than this interview suggests? (My doubt is partly because I know Kleiner is a smart guy and a practiced interviewer, so the lack of real content is unlikely to be his fault.) Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From avantguardian2020 at yahoo.com Thu Jan 14 02:34:42 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 13 Jan 2010 18:34:42 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <399013.3045.qm@web36505.mail.mud.yahoo.com> Message-ID: <876885.6117.qm@web65609.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stathis Papaioannou > To: gordon.swobe at yahoo.com; ExI chat list > Sent: Wed, January 13, 2010 5:56:38 PM > Subject: Re: [ExI] Meaningless Symbols. > > The problem is that you can't explain how humans get their > understanding. 
It doesn't help to say that some physical activity > happens in neurons which produces the understanding, not because you > haven't given the details of the physical activity, but because you > haven't explained how, in general terms, it is possible for the > physical activity in a brain to pull off that trick but not the > physical activity in a computer. Even if it's true that computers only > do syntax and syntax can't produce meaning (it isn't, since logically > there is nowhere else for meaning to come from) this does not mean > that computers can't produce meaning. *China room no important me say* If you can understand that, then syntax is not all that relevant to human understanding at a fundamental level. A similarly scrambled statement in any scripting language that I can think of would have caused the program to halt. Yet your brain takes it in stride and understands. *This* is what I think is fascinating. > It would be like saying brains > only do chemistry and chemistry can't produce meaning. In the course > of the chemistry brains manipulate symbols and that's where the > meaning comes from if you believe meaning can only come from symbol > manipulation; and in the course of manipulating symbols computers are > physically active and that's where the meaning comes from if you > believe meaning can only come from physical activity. Brains, being part of the real world, do it all. Chemistry is merely a model that simplifies a small part of what reality does so that we can discuss it and think about it. But there are things that the universe does for which there are not yet words, symbols, or concepts. Imagine trying to explain "quantum erasure" to Plato in ancient Greek and you will see what I am getting at. Stuart LaForge "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong From emlynoregan at gmail.com Thu Jan 14 03:31:01 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 14 Jan 2010 14:01:01 +1030 Subject: [ExI] The Nature of Technology: What It Is and How It Evolves (Brian Arthur) In-Reply-To: <201001140222.o0E2M4Hv027890@andromeda.ziaspace.com> References: <201001140222.o0E2M4Hv027890@andromeda.ziaspace.com> Message-ID: <710b78fc1001131931kb38a2dcwc49cd41a02ec02c7@mail.gmail.com> 2010/1/14 Max More : > Many of the economically-inclined people here will be familiar with the > previous work of economist W. Brian Arthur. I just read a disappointing > interview with him? > > The Evolution of Technology > by Art Kleiner > strategy+business, January 4, 2010 > http://www.strategy-business.com/article/00014?pg=all > > My review/commentary (written for executives) is here: > http://www.manyworlds.com/exploreco.aspx?coid=CO1121015233189 > http://www.manyworlds.com/exploreCO.aspx?coid=CO1121015233189 > > His new book is The Nature of Technology: What It Is and How It Evolves. Has > anyone read it? Is it drastically better than this interview suggests? (My > doubt is partly because I know Kleiner is a smart guy and a practiced > interviewer, so the lack of real content is unlikely to be his fault.) > > Max I quite liked that interview; there's no depth, but then it's only short. But the fundamental point that the economy is an organizing system for technology is a good one. It reminds me of Kevin Kelly's talk "What Technology Wants" (I haven't read his book). Is there something particular you disagreed with in this interview? Or it's just lightweight?
-- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From stathisp at gmail.com Thu Jan 14 03:59:38 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 14 Jan 2010 14:59:38 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <876885.6117.qm@web65609.mail.ac4.yahoo.com> References: <399013.3045.qm@web36505.mail.mud.yahoo.com> <876885.6117.qm@web65609.mail.ac4.yahoo.com> Message-ID: 2010/1/14 The Avantguardian : > *China room no important me?say* > > ?If you can understand that, then syntax is not all that?relevant to?human understanding at a fundamental level. A similarly scrambled?statement in any?scripting language?that I can think of would have caused the program to halt. Yet your brain takes it stride and understands. *This* is what I think is fascinating. A human natural language is not like a programming language but more like a complex application written in the programming language. The programming language is like the genetic code, which cannot cope with syntactical errors. And the genetic code is itself an example of an abstract program which, when implemented, gives rise to intelligence and consciousness. -- Stathis Papaioannou From max at maxmore.com Thu Jan 14 05:38:36 2010 From: max at maxmore.com (Max More) Date: Wed, 13 Jan 2010 23:38:36 -0600 Subject: [ExI] The Nature of Technology: What It Is and How It Evolves(Brian Arthur) Message-ID: <201001140538.o0E5cqqT003998@andromeda.ziaspace.com> >Is there something particular you disagreed with in this interview? No. >Or it's just lightweight? Yes, very. And surprisingly so. It IS just an interview, so maybe the book is great. That's what I'm hoping to find out before actually reading it. (Too many books already in the stack.) Max From bbenzai at yahoo.com Thu Jan 14 12:05:21 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 14 Jan 2010 04:05:21 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <296296.66105.qm@web113614.mail.gq1.yahoo.com> Gordon Swobe claimed: > Here's the classic one-line program: > > print "Hello World" ... > > And by the way the most sophisticated program possible on a > s/h system will differ in no philosophically important way > from this one. That's just about the most ridiculous thing I've heard anyone say on this list. Or anywhere, for that matter. (I think it may be symptomatic of 'Terminal Confusion'!) Ben Zaiboc From gts_2000 at yahoo.com Thu Jan 14 13:07:15 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 14 Jan 2010 05:07:15 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <876885.6117.qm@web65609.mail.ac4.yahoo.com> Message-ID: <62712.42339.qm@web36503.mail.mud.yahoo.com> --- On Wed, 1/13/10, The Avantguardian wrote: > Yet your brain takes it stride and understands... > *This* is what I think is fascinating.... > ...But there are things that the universe does for > which there are not yet words, symbols, or concepts. Right, Stuart! The human brain does things that software/hardware systems cannot and will never do. Its method remains a mystery for now, and that mystery makes some of us uncomfortable, but our discomfort does not change the facts. -gts From stathisp at gmail.com Thu Jan 14 13:28:09 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 15 Jan 2010 00:28:09 +1100 Subject: [ExI] Meaningless Symbols. 
In-Reply-To: <62712.42339.qm@web36503.mail.mud.yahoo.com> References: <876885.6117.qm@web65609.mail.ac4.yahoo.com> <62712.42339.qm@web36503.mail.mud.yahoo.com> Message-ID: 2010/1/15 Gordon Swobe : > --- On Wed, 1/13/10, The Avantguardian wrote: > >> Yet your brain takes it stride and understands... >> *This* is what I think is fascinating.... >> ...But there are things that the universe does for >> which there are not yet words, symbols, or concepts. > > Right, Stuart! > > The human brain does things that software/hardware systems cannot and will never do. Its method remains a mystery for now, and that mystery makes some of us uncomfortable, but our discomfort does not change the facts. You've previously implied that S/H systems *can* do everything the brain can, short of consciousness. Otherwise, zombies would be impossible. -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 14 13:37:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 14 Jan 2010 05:37:59 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <2573.11215.qm@web36506.mail.mud.yahoo.com> --- On Wed, 1/13/10, Stathis Papaioannou wrote: > The problem is that you can't explain how humans get their > understanding. I can explain how they do not get their understanding, which brings us one step closer to understanding how they do. > It doesn't help to say that some physical activity > happens in neurons which produces the understanding, not > because you haven't given the details of the physical activity, but > because you haven't explained how, in general terms, it is possible for > the physical activity in a brain to pull off that trick but not > the physical activity in a computer. But I have. You just don't believe me or understand me or both. > Even if it's true that computers only do syntax and syntax can't > produce meaning (it isn't, since logically there is nowhere else for > meaning to come from) I think that last thought of yours needs some work. :) You say "logically there is nowhere else for meaning to come from", but *logically* nothing can get semantics from knowing rules of syntax, or vocabulary from knowing rules of grammar. Instead of accepting an illogical answer to the question of meaning as you seem wont to do, I submit that the only logical choice is to accept that brains do something we don't yet fully understand. That leaves us with a bit of a mystery, but at least we haven't sacrificed logic to get there. We would be pretty arrogant to pretend that we fully understand the human brain in 2010. We don't yet know even why George Foreman fell down in the 8th round against Muhammad Ali. Neuroscience is still in its infancy. -gts From gts_2000 at yahoo.com Thu Jan 14 13:49:05 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 14 Jan 2010 05:49:05 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <822204.96676.qm@web36508.mail.mud.yahoo.com> --- On Thu, 1/14/10, Stathis Papaioannou wrote: > You've previously implied that S/H systems *can* do > everything the brain can, short of consciousness. That last thing you mention seems pretty important. -gts From stathisp at gmail.com Thu Jan 14 14:57:43 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 15 Jan 2010 01:57:43 +1100 Subject: [ExI] Meaningless Symbols. 
In-Reply-To: <2573.11215.qm@web36506.mail.mud.yahoo.com> References: <2573.11215.qm@web36506.mail.mud.yahoo.com> Message-ID: 2010/1/15 Gordon Swobe : >> It doesn't help to say that some physical activity >> happens in neurons which produces the understanding, not >> because you haven't given the details of the physical activity, but >> because you haven't explained how, in general terms, it is possible for >> the physical activity in a brain to pull off that trick but not >> the physical activity in a computer. > > But I have. You just don't believe me or understand me or both. You've said that formal programs can't produce understanding but physical activity can produce understanding. Computers not only run formal programs, they also do physical activity. You have a hunch that the sort of physical activity in computers is incapable of producing understanding. But a hunch is not good enough in a philosophical argument. To make your case you have to show that it is *impossible* for the physical activity in a computer to support understanding. For example, you would have to show that running a program actually prevents understanding. That would mean that electric circuits arranged in a complex and disorganised fashion that could not be seen as implementing a program could potentially have understanding but not if the same components were organised to form a computer. Is that right? >> Even if it's true that computers only do syntax and syntax can't >> produce meaning (it isn't, since logically there is nowhere else for >> meaning to come from) > > I think that last thought of yours needs some work. :) > > You say "logically there is nowhere else for meaning to come from", but *logically* nothing can get semantics from knowing rules of syntax, or vocabulary from knowing rules of grammar. It's true that given an unknown string of symbols it's impossible, even in principle, to work out their meaning even though you may be able to work out a syntax. However, you can ground the symbols by associating them with symbols you already know, a syntactic operation. And ultimately the symbols you already know are grounded by associating them with sense data, another syntactic operation. So syntax is both necessary and sufficient for semantics. How else can any entity, human or computer, possibly derive the meaning of something other than through a process like this? And my original point: even if you still believe meaning must come from the physical processes inside a brain, why can't it also come from the physical processes inside a computer? -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Jan 14 14:33:07 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 14 Jan 2010 06:33:07 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <822204.96676.qm@web36508.mail.mud.yahoo.com> Message-ID: <935338.61673.qm@web36504.mail.mud.yahoo.com> Stathis Papaioannou wrote: > You've previously implied that S/H systems *can* do > everything the brain can, short of consciousness. This might be a good time to mention my view about the difference between "qualia" and "quality of consciousness". Following Chalmers, I once considered qualia as something like phenomenal entities contained by consciousness, as if consciousness could exist sans qualia. I think it better now to consider qualia as qualities of a single unified conscious experience, which is after all what qualia really means anyway. The experience of understanding words (of having semantics) counts as a quality of conscious experience. 
It's that experience among others that I think s/h systems cannot have. Point being that consciousness and semantics do not exist as independent concepts. You can't have the potential for one without the potential for the other in humans, but I think someday we'll simulate the appearance of both in s/h systems (weak AI). -gts From gts_2000 at yahoo.com Thu Jan 14 15:17:24 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 14 Jan 2010 07:17:24 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <805502.70698.qm@web36505.mail.mud.yahoo.com> --- On Thu, 1/14/10, Stathis Papaioannou wrote: > You've said that formal programs can't produce > understanding but physical activity can produce understanding. > Computers not only run formal programs, they also do physical activity. > You have a hunch that the sort of physical activity in computers is > incapable of producing understanding. But a hunch is not good enough in a > philosophical argument. I don't consider it a "hunch". I look at programs (and I write them) and I look at the hardware that implements them (and I work on that too) and I see only syntactical form-based operations. And I understand and agree with those who say syntax cannot give semantics, that grammar cannot give vocabulary. It's last point on which we disagree. You want to believe that performing form-based syntactic operations in software or hardware will magically give rise to human-like understanding. gotta run. more later -gts From emlynoregan at gmail.com Thu Jan 14 15:29:50 2010 From: emlynoregan at gmail.com (Emlyn) Date: Fri, 15 Jan 2010 01:59:50 +1030 Subject: [ExI] Simulation argument in Dinosaur Comics Message-ID: <710b78fc1001140729n1f60ac75q9d8917659054ab71@mail.gmail.com> http://www.qwantz.com/index.php?comic=1623 Nick Bostrom's cool just jumped up a level, what with being referred to by name by T-Rex. And he was already quite cool, by all accounts. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From natasha at natasha.cc Thu Jan 14 17:05:17 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 14 Jan 2010 11:05:17 -0600 Subject: [ExI] "Future Society" China - Max More, Cyrille Jegu, and Natasha Message-ID: <7EB4023A822F44118A6610C6A70A8F0B@DFC68LF1> "Future Society" was broadcast live on Tuesday. Here is a link http://english.cri.cn/08webcast/today.htm It was a living panel and Transhumanism, the Proactionary Principle, and Human Enhancement were the foci of discussion. Nlogo1.tif Natasha Vita-More -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 731 bytes Desc: not available URL: From natasha at natasha.cc Thu Jan 14 17:09:26 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 14 Jan 2010 11:09:26 -0600 Subject: [ExI] Overposting to List (RE: Meaningless Symbols) In-Reply-To: <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> References: <331623.31673.qm@web36501.mail.mud.yahoo.com><08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> Message-ID: John, please be careful about over posting. Thank you, Natasha Nlogo1.tif Natasha Vita-More -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.jpg Type: image/jpeg Size: 731 bytes Desc: not available URL: From jonkc at bellsouth.net Thu Jan 14 17:08:27 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 14 Jan 2010 12:08:27 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <399013.3045.qm@web36505.mail.mud.yahoo.com> References: <399013.3045.qm@web36505.mail.mud.yahoo.com> Message-ID: <1D0D195E-39CE-47C4-8CF2-5E3CF9FA5AB3@bellsouth.net> On Jan 13, 2010, Gordon Swobe wrote: > Forget about the CR. Neither of us care if parts of the system understand anything. We want to know if the system as a whole knows Chinese from manipulating Chinese symbols according to rules of syntax. It cannot, because syntax only tells the system 'what' to put 'where' and 'when'. DNA used "formal rules of syntax" that you have such contempt for to tell just 20 different types of small Amino Acids to go to certain very specific positions until they had formed something called "Gordon Swobe". As for the semantics of it, that is in the eye of the beholder not intrinsic to it. > The system looks at the forms of things, not at the meanings of things. You keep making the exact same error over and over again; you look at something that is grand and complex and break it down into smaller and smaller parts until you find that the part you're looking at is not very grand or complex at all, and then you announce that this proves that there must be some secret mysterious key ingredient that is missing from the analysis. But that's just silly, analysis is the process of breaking a complex topic or substance down into smaller parts to gain a better understanding of it; if the part is still mysterious then it's still too big and you need to break it down some more. On and off is not mysterious at all so I claim victory, you think that very lack of puzzlement is a sign of failure. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Jan 14 18:07:21 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 14 Jan 2010 13:07:21 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <321375.66546.qm@web36501.mail.mud.yahoo.com> References: <321375.66546.qm@web36501.mail.mud.yahoo.com> Message-ID: <763F1427-3EDA-47E0-AEC4-C245825F7A7B@bellsouth.net> On Jan 13, 2010, Gordon Swobe wrote: > The Englishman stands naked in a field. He represents the entire system. He and his neurons (trillions upon trillions upon trillions of them if you like) process Chinese symbols according to the *syntactic rules specified in a program* which he and his neurons have memorized. Show me who or what understands the meanings of the symbols. The trouble with all your thought experiments is that you come up with more and more bizarre situations and just say there is (or is not) understanding there, and then you challenge us to explain it; but you have no way of knowing what the little man understands either before or after he swallowed a book larger than the observable universe, all you can know is if he does (or does not) act intelligently. When Einstein did thought experiments everything followed logically, he didn't announce that the man on the train platform saw this and that unless it was obvious that is exactly what he would see, you are announcing things that a far from obvious and sometimes announcing the very thing you are trying to prove. 
> If you cannot then you agree with your 7th grade English teacher who knew that following the rules of grammar (syntax) is not the same as understanding the meanings of the words (semantics). Things may not be quite as clear cut as your 7th grade English teacher thought. I doubt if things in the quantum realm are much concerned with your teacher's opinion, and that's all semantics is, an opinion. If the Many Worlds interpretation of Quantum Mechanics is correct then in one of those worlds that has a different syntax it would be the opinion of the inhabitants that this exact same post is the operating instructions for a new type of aquarium air pump. And the syntax of your genome is very clear and specific, but tell me about its semantics. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Thu Jan 14 19:54:51 2010 From: scerir at libero.it (scerir) Date: Thu, 14 Jan 2010 20:54:51 +0100 (CET) Subject: [ExI] links only Message-ID: <11649444.861021263498891904.JavaMail.defaultUser@defaultHost> Elvis transhumanist? http://www.3quarksdaily.com/3quarksdaily/2010/01/suspicious-minds-wonder-was- elvis-a-transhumanist.html Nietzsche and the posthuman http://www.3quarksdaily.com/3quarksdaily/2010/01/nietzsche-and-our-posthuman- future.html Murray at home http://thesciencenetwork.org/programs/santa-fe-institute-2009/murray-gell- mann-at-home From max at maxmore.com Thu Jan 14 20:27:54 2010 From: max at maxmore.com (Max More) Date: Thu, 14 Jan 2010 14:27:54 -0600 Subject: [ExI] Beyond Beijing radio online link Message-ID: <201001142028.o0EKS34O011346@andromeda.ziaspace.com> We heard some good feedback on this interview. If you like listening to radio online... http://english.cri.cn/7146/2010/01/13/481s542097.htm Hour 1 is the one to listen to. ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From thespike at satx.rr.com Thu Jan 14 21:41:03 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 14 Jan 2010 15:41:03 -0600 Subject: [ExI] Art for Art's sake (well, for Damien's, anyway) Message-ID: <4B4F8F6F.8060602@satx.rr.com> Any ace Photoshoppers here who'd care to consider splicing a b&w mugshot into a Hubble skyscape (or the like) for a book cover? Would have to be done for the merriment of the thing, plus a cover credit line, but no pay, alas. Contact me offlist, pliz. Unless you can find some way to bring the Chinese Room into the thread. Damien Broderick From spike66 at att.net Thu Jan 14 22:29:15 2010 From: spike66 at att.net (spike) Date: Thu, 14 Jan 2010 14:29:15 -0800 Subject: [ExI] links only In-Reply-To: <11649444.861021263498891904.JavaMail.defaultUser@defaultHost> References: <11649444.861021263498891904.JavaMail.defaultUser@defaultHost> Message-ID: <8953F80685A441A7A1C1AB1F26B9FCD1@spike> > ...On Behalf Of scerir > Subject: [ExI] links only > > Elvis transhumanist? > ... scerir Excellent! RU Sirius used to post on ExI a long time ago. Anyone here friends with him? Do invite him to drop in. As a huge Elvis fan, I do agree, he was a trendsetter with a one-in-a-billion voice. 
spike From bbenzai at yahoo.com Thu Jan 14 22:13:15 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 14 Jan 2010 14:13:15 -0800 (PST) Subject: [ExI] atheism In-Reply-To: Message-ID: <934175.30057.qm@web113604.mail.gq1.yahoo.com> Speaking of Atheism, this cracked me up: http://funnyatheism.com/videos/flood-problems Although I must admit, I've never heard of the angel Geoffrey. Ben Zaiboc From lcorbin at rawbw.com Thu Jan 14 23:40:14 2010 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 14 Jan 2010 15:40:14 -0800 Subject: [ExI] Seeing Through New Lenses In-Reply-To: References: <22360fa11001130855h16989e79r4d04ace1c4c050c@mail.gmail.com> Message-ID: <4B4FAB5E.7060302@rawbw.com> Jef writes > Forwarding an article with relevance to the Extropy list. It's ironic > that in a forum dedicated to the topic of "extropy" , which might be > interpreted as increasingly meaningful increasing change, there's such > vocal support for hard rationality, simple truth, and seeing the world > the way way it "really is." I haven't noticed that, lately. But then... that probably supports your contention! :) > Edge Perspectives with John Hagel: Relationships and Dynamics - Seeing > Through New Lenses > > while Westerners are more oriented to reductionist views. This basic > difference plays out in fascinating ways, including the greater > attention by East Asian children to verbs while Western children tend > to learn nouns faster. > > One very tangible illustration of this is a simple test reported by > Nisbett. A developmental psychologist showed three pictures to > children ? a cow, a chicken and some grass. He asked children from > America which two of the pictures belonged together. Most of them > grouped the cow and chicken together because they were both objects in > the same category of animals. Chinese children on the other hand > tended to group the cow and grass together because ?cows eat grass? ? > they focused on the relationship between two objects rather than the > objects themselves. > > [Which of these do YOU prefer? Which of these do you think is closer > to The Truth? Why? - Jef] Naturally, it shouldn't be a matter of preference, per se, but it is interesting which is more salient to one. I along with most westerners (in line with the article's point) would tend to lump the cow and the chicken together "as animals", surrendering to abstract categorization. Have you read Flynn's book "What is intelligence?". In short, he tries to explain the Flynn effect as greater practice (especially among westerners, I suppose) at *decontextualizing*. This process causes abstract categories to come more quickly to mind than before. Hence, people today do better on IQ tests precisely because of their greater familiarity and facility with abstract categories, e.g. Animal, Mineral, or Vegetable. What is amazing about your cite is that it flies very much in the face of this. After all, east Asians are no slouches on IQ tests, and if the tests are being explicitly designed---as Flynn would have us believe ---to measure decontextualization, then this "cows eat grass" answer goes against this supposed insidious designing of the IQ tests. I have no idea how to draw a bottom line to all this, however, except to say that just as many of Tversky and Kahneman's errors arise because humans today find themselves in a very different environment from which they evolved, most of the intellectual tasks demanded of people today (that are relatively new), would also seem to demand the ability to quickly decontextualize. 
(For example, how many times does the word "of" occur in a given sentence, or how many zeros are there in a given binary string.) Prediction: citified Chinese will go for the categorization in the "cows, chickens, and grass" more than will the countryside Chinese. Flynn has many amusing anecdotes about the way rural people resist answering IQ challenge questions in the hoped-for abstract way, and resort to common sense relationships instead---which actually would include "cows eat grass". Very puzzling. Lee > I found this intriguing in the context of our continuing work at the > Center for the Edge on the Big Shift. As I indicated in a previous > posting, the Big Shift is a movement from a world where value creation > depends on knowledge stocks to one where value resides in knowledge > flows ? in other words, objects versus relationships. Our Western way > of perceiving has been very consistent with a world of knowledge > stocks and short-term transactions. As we move into a world of > knowledge flows, though, I suspect the East Asian focus on > relationships may be a lot more helpful to orient us (no pun > intended). > > ... From stathisp at gmail.com Fri Jan 15 00:59:14 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 15 Jan 2010 11:59:14 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <805502.70698.qm@web36505.mail.mud.yahoo.com> References: <805502.70698.qm@web36505.mail.mud.yahoo.com> Message-ID: 2010/1/15 Gordon Swobe : > --- On Thu, 1/14/10, Stathis Papaioannou wrote: > >> You've said that formal programs can't produce >> understanding but physical activity can produce understanding. >> Computers not only run formal programs, they also do physical activity. >> You have a hunch that the sort of physical activity in computers is >> incapable of producing understanding. But a hunch is not good enough in a >> philosophical argument. > > I don't consider it a "hunch". I look at programs (and I write them) and I look at the hardware that implements them (and I work on that too) and I see only syntactical form-based operations. And I understand and agree with those who say syntax cannot give semantics, that grammar cannot give vocabulary. > > It's last point on which we disagree. You want to believe that performing form-based syntactic operations in software or hardware will magically give rise to human-like understanding. I look at minds and I look at the hardware that implements them, and all I see is neurons firing according to mindless rules. I can't say it's obvious to me how this leads to either intelligence or consciousness. The code the brain uses to represent objects in the real world and concepts is much less well understood than the code computers use, but it is a code, and ultimately all codes are arbitrary. Presumably for the brain you don't believe the code or the algorithm implemented by neural networks firing gives rise to understanding, but rather something intrinsic to the matter or the way the matter behaves. So how are computers disadvantaged here? They too use a code and implement algorithms, and they too contain matter engaged in physical activity. 
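To illustrate what I mean by the arbitrariness of a code (a toy sketch of my own, in Python; the words and the relabelling are invented), relabel every token a program uses, and so long as the relabelling is consistent the program behaves identically. What carries the weight is the pattern of relations among the symbols, not the tokens themselves.

# A tiny web of symbols, each defined only by its links to other symbols.
links = {"dog": ["animal", "barks"], "animal": ["alive"], "barks": [], "alive": []}

def related(symbol, net):
    # Collect everything a symbol is linked to, directly or indirectly.
    seen, stack = set(), [symbol]
    while stack:
        for t in net.get(stack.pop(), []):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

# Recode every token arbitrarily, e.g. as numbered firing patterns.
relabel = {name: "PATTERN-%d" % i for i, name in enumerate(links)}
recoded = {relabel[k]: [relabel[v] for v in vs] for k, vs in links.items()}

print(related("dog", links))             # {'animal', 'barks', 'alive'}
print(related(relabel["dog"], recoded))  # the same structure under different tokens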
-- Stathis Papaioannou From max at maxmore.com Fri Jan 15 06:47:48 2010 From: max at maxmore.com (Max More) Date: Fri, 15 Jan 2010 00:47:48 -0600 Subject: [ExI] atheism Message-ID: <201001150702.o0F72W9Q014431@andromeda.ziaspace.com> Ben Zaiboc wrote: >Speaking of Atheism, this cracked me up: > >http://funnyatheism.com/videos/flood-problems > >Although I must admit, I've never heard of the angel Geoffrey. That was excellent, thanks. I also enjoyed "Gay Scientists Isolate Christian Gene" http://funnyatheism.com/videos/gay-scientists-isolate-christian-gene From gts_2000 at yahoo.com Fri Jan 15 13:06:28 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 15 Jan 2010 05:06:28 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <213685.55239.qm@web36502.mail.mud.yahoo.com> --- On Thu, 1/14/10, Stathis Papaioannou wrote: > Presumably for the brain you don't believe the code or the > algorithm implemented by neural networks firing gives rise > to understanding, If the brain uses code or implements algorithms at all (and it probably does not) then it must do something else besides. The computationalist theory of mind simply fails to explain the facts. Even if we can compute the brain, this would not mean that the brain actually does computations. > but rather something intrinsic to the matter or the way the matter > behaves. Right. The matter in the brain does something more sophisticated than the mere running of programs. Science does not yet understand it well just as it does not yet understand many, many, many things. I think that eventually neuroscience and the philosophy of mind will merge into one field -- that neuroscientists will come to see that they hold in their hands the answers to these questions of philosophy. It has already started if you look with open eyes: neuroscientists have produced antidepressant drugs that brighten mood, a quality of consciousness, and drugs that alleviate pain, another quality of consciousness, and so on and so on. People would understand this obvious link between science and philosophy except that they're still unwitting slaves to the vocabulary and concepts of mind/matter duality left over from the time of Descartes. People cannot see what should be blindingly obvious: that consciousness exists as part of the physical world, as a high level process of the physical brain. > So how are computers disadvantaged here? They can't get semantics from syntax anymore than you can. -gts From stefano.vaj at gmail.com Fri Jan 15 14:34:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 15 Jan 2010 15:34:41 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <213685.55239.qm@web36502.mail.mud.yahoo.com> References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> 2010/1/15 Gordon Swobe : > If the brain uses code or implements algorithms at all (and it probably does not) then it must do something else besides. It may surprise you, but I have implemented in my own brain at a very young age the algorithm allowing me to multiply integers of arbitrary length. But perhaps I am not really computing numbers at all, it's all an illusion, the solution actually gets communicated to me from another dimension through some unknown form of quantum-based communication mechanism... :-) -- Stefano Vaj From gts_2000 at yahoo.com Fri Jan 15 15:32:04 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 15 Jan 2010 07:32:04 -0800 (PST) Subject: [ExI] Meaningless Symbols. 
In-Reply-To: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> Message-ID: <129394.58494.qm@web36507.mail.mud.yahoo.com> --- On Fri, 1/15/10, Stefano Vaj wrote: > It may surprise you, but I have implemented in my own brain > at a very young age the algorithm allowing me to multiply integers of > arbitrary length. Nobody questions that we can intentionally run algorithms in our brains. But how can you come to know the meanings of symbols merely by virtue of running programs in your mind that manipulate symbols according to syntactic rules as computer programs actually do? You can't. Less easy to see is that your computer calculates the answers to mathematical questions without understanding the meanings of the numbers. I think you'll agree that as you count to ten on your fingers, your mind but not your fingers understand the numbers. We invented calculators because we needed to count to eleven. They don't understand numbers either. -gts From aware at awareresearch.com Fri Jan 15 16:13:23 2010 From: aware at awareresearch.com (Aware) Date: Fri, 15 Jan 2010 08:13:23 -0800 Subject: Re: [ExI] Meaningless Symbols. In-Reply-To: <129394.58494.qm@web36507.mail.mud.yahoo.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> Message-ID: On Fri, Jan 15, 2010 at 7:32 AM, Gordon Swobe wrote: > --- On Fri, 1/15/10, Stefano Vaj wrote: > >> It may surprise you, but I have implemented in my own brain >> at a very young age the algorithm allowing me to multiply integers of >> arbitrary length. > > Nobody questions that we can intentionally run algorithms in our brains. But how can you come to know the meanings of symbols merely by virtue of running programs in your mind that manipulate symbols according to syntactic rules as computer programs actually do? You can't. > > Less easy to see is that your computer calculates the answers to mathematical questions without understanding the meanings of the numbers. It's funny in a meta way that all this discussion undoubtedly reflects the true nature of the participants, but represents no "true understanding" at this level of the system either. Gordon, I'll say it again: What you're seeking to grasp is not ontological; it's epistemological. - Jef From stefano.vaj at gmail.com Fri Jan 15 17:27:20 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 15 Jan 2010 18:27:20 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <129394.58494.qm@web36507.mail.mud.yahoo.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> Message-ID: <580930c21001150927o3db292c5n7c5428669adc81e1@mail.gmail.com> 2010/1/15 Gordon Swobe : > Nobody questions that we can intentionally run algorithms in our brains. But how can you come to know the meanings of symbols merely by virtue of running programs in your mind that manipulate symbols according to syntactic rules as computer programs actually do? You can't. The "meaning" of a symbol is nothing else than its association with another symbol (say, the 'meaning assigned to "x" in this equation is "3"'). What is there in such a trivial transaction which would not be "algorithmic"? :-/ -- Stefano Vaj From jonkc at bellsouth.net Fri Jan 15 18:01:53 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 15 Jan 2010 13:01:53 -0500 Subject: [ExI] Meaningless Symbols.
In-Reply-To: <296296.66105.qm@web113614.mail.gq1.yahoo.com> References: <296296.66105.qm@web113614.mail.gq1.yahoo.com> Message-ID: <34B0964D-C0F6-4C70-8AFB-CFE1DA575B9A@bellsouth.net> Gordon Swobe : > Here's the classic one-line program: > > print "Hello World" The title of this thread is "Meaningless Symbols", if "print" was one of those to the computer then it would not do exactly the same thing each time it encountered that symbol, instead it would do some arbitrary thing. Apparently the computer ascribed meaning to at least one of those "meaningless symbols". John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at m056832107.syzygy.com Fri Jan 15 18:23:13 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 15 Jan 2010 18:23:13 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <129394.58494.qm@web36507.mail.mud.yahoo.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> Message-ID: <20100115182313.5.qmail@syzygy.com> Gordon writes: >Less easy to see is that your computer calculates the answers to > mathematical questions without understanding the meanings of the > numbers. My computer certainly doesn't understand the beauty of clouds clinging to a mountainside, even if I'm using it to process a photograph of those clouds. My computer does understand numbers, though. That understanding is hardwired into adder circuits in the CPU. The fact that it comes up with the correct answers for the following: 0 + 0 = 0 1 + 1 = 2 indicates that it understands the fundamental difference between the numbers zero and one. The essential zeroness and oneness are captured in the behavior of addition. The syntax of those two statements above is identical, but the fact that the first and last symbols in the first line must be the same results from the zeroness of the first symbol. I think what you keep coming back to is the concept of "qualia". You say that simulated water doesn't get your computer wet. Well, that's just blindingly obvious, but I think what you're really concerned with is where the wetness quale is. When you talk of "symbol grounding", you are asking about the qualia for those symbols. When you talk of the neural correlates of consciousness, you're asking what qualia are built out of. When you assert that syntax can never create semantics, you're really asserting that syntax cannot create qualia. Does this sound right to you? Well, I'd like to assert that qualia are symbols, just like the other symbols that your brain manipulates. The redness quale may not be the same symbol as the word red, but they're closely associated. You can say "I am seeing red" when the redness quale symbol is active. There's nothing particularly mysterious about qualia symbols. When red light hits your retina, a pattern of neural firing occurs in your brain, part of which represents the red quale symbol. Just like other symbols in your brain, qualia can be represented as data and moved to other substrates. We don't yet know how the brain encodes these symbols, but we can be fairly confident that that encoding is represented by the synaptic connections between neurons, and not by ATP molecules in mitochondria. -eric From thespike at satx.rr.com Fri Jan 15 18:43:48 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 15 Jan 2010 12:43:48 -0600 Subject: [ExI] Meaningless Symbols. 
In-Reply-To: <20100115182313.5.qmail@syzygy.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> Message-ID: <4B50B764.8010003@satx.rr.com> On 1/15/2010 12:23 PM, Eric Messick wrote: > My computer does understand numbers, though. > > That understanding is hardwired into adder circuits in the CPU. > > The fact that it comes up with the correct answers for the following: > > 0 + 0 = 0 > 1 + 1 = 2 > > indicates that it understands the fundamental difference between the > numbers zero and one. I just poured 3 cups of water into a 2 cup jar. Does the fact that it stopped accepting water after I'd put in 2 cups and overflowed the rest mean it *understands* 3>2? Then I put a 1 foot rule next to a book and the 9 matched up with the top of the book. Did the rule *understand* how tall the book is? Computer programs understand nothing more than that. This all reminds me of the behaviorist idiocy of the 1950s. Damien Broderick From eric at m056832107.syzygy.com Fri Jan 15 19:38:45 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 15 Jan 2010 19:38:45 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B50B764.8010003@satx.rr.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> Message-ID: <20100115193845.5.qmail@syzygy.com> Damien writes: >I just poured 3 cups of water into a 2 cup jar. Does the fact that it >stopped accepting water after I'd put in 2 cups and overflowed the rest >mean it *understands* 3>2? Then I put a 1 foot rule next to a book and >the 9 matched up with the top of the book. Did the rule *understand* how >tall the book is? No, I wouldn't attribute understanding to those objects. You're not describing the manipulation of symbols here, but of physical objects. Understanding of symbols is a symbolic operation. > Computer programs understand nothing more than that. I'll agree that understanding of zeroness and oneness is a very basic thing. The adder that encodes that understanding is a very simple circuit, so it's level of understanding must be very simple. Your brain is much more complicated, so it can understand much more complicated things. >This all reminds me of the behaviorist idiocy of the 1950s. Sounds like you've got a problem with behaviorist descriptions. Can you explain? -eric From thespike at satx.rr.com Fri Jan 15 19:49:08 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 15 Jan 2010 13:49:08 -0600 Subject: [ExI] Meaningless Symbols. In-Reply-To: <20100115193845.5.qmail@syzygy.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <20100115193845.5.qmail@syzygy.com> Message-ID: <4B50C6B4.1010404@satx.rr.com> On 1/15/2010 1:38 PM, Eric Messick wrote: > Sounds like you've got a problem with behaviorist descriptions. Can > you explain? I don't have to. Chomsky did it in 1959 when he killed Skinner with a single review. 
[reprinted http://www.chomsky.info/articles/1967----.htm ] Damien Broderick From possiblepaths2050 at gmail.com Fri Jan 15 19:50:07 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 15 Jan 2010 12:50:07 -0700 Subject: [ExI] Overposting to List (RE: Meaningless Symbols) In-Reply-To: References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> Message-ID: <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> Hi Natasha, I don't understand about the warning. I have hardly posted at all over the last month or so. And so I plead innocent! hee But I'm sure I will offend down the road. ; ) Oh, have you seen "The Singularity is Near" movie yet, or "The Transcendant Man" documentary? What did you think of them?? Oh, and did you like Avater? I admit it felt like Dances with Wolves in Space, but I don't care because it still enthralled me with the wonder and romance of a strange world. I did think that for the mid 22nd century the military technology of humanity was a joke. But hey, it's a minor quibble. A plotline having a military nano utility fog devouring the poor beleagered aliens within 2 seconds flat would not have been very sporting... I'm playing with the idea (if things work out...) of relocating to NYC to take over my father's $325 two bedroom rent control apt. (two miles away from the site of the WTC). I am not in love with the idea of living in the Big Apple but I suppose there is lots of opportunity there. I admit to loving the cultural/nightlife activities of the Phoenix Valley, it's just the hot climate (and *mindlessly* conservative political climate...) that I hate. And here women roll their eyes at me because I don't have a car (Phoenix is totally a car place!). As I understand it, cars are very optional for even middle class people in NYC, due to the massive parking headache. Punch Max in the arm for me! : ) Warm wishes, John 2010/1/14 Natasha Vita-More > John, please be careful about over posting. > > Thank you, > Natasha > > > [image: Nlogo1.tif] Natasha Vita-More > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 731 bytes Desc: not available URL: From thespike at satx.rr.com Fri Jan 15 20:09:21 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 15 Jan 2010 14:09:21 -0600 Subject: [ExI] NY NY it's a wonderful town In-Reply-To: <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> Message-ID: <4B50CB71.5090907@satx.rr.com> On 1/15/2010 1:50 PM, John Grigg wrote: > I'm playing with the idea (if things work out...) of relocating to NYC > to take over my father's $325 two bedroom rent control apt. (two miles > away from the site of the WTC). I am not in love with the idea of > living in the Big Apple Are you insane? Hey, I'll take it off your hands... 
:) Damien Broderick From mlatorra at gmail.com Fri Jan 15 21:44:47 2010 From: mlatorra at gmail.com (Michael LaTorra) Date: Fri, 15 Jan 2010 14:44:47 -0700 Subject: [ExI] NY NY it's a wonderful town In-Reply-To: <4B50CB71.5090907@satx.rr.com> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> <4B50CB71.5090907@satx.rr.com> Message-ID: <9ff585551001151344n7cc79ac4k4627a515e83228ab@mail.gmail.com> John, if you do this, sub-let [rent] the other bedroom. If you're in a good neighborhood, it's worth far more than the rent you're paying. For example: 2 Bedrooms $1700 828 sq. ft.1 Bath(s) http://new-york-city.apartmenthomeliving.com/apartments-for-rent/2-bedroom/from-600 New York is a great city. Great and terrible. Extremely expensive. Extremely exciting. I was born there. Best of luck! Mike LaTorra On Fri, Jan 15, 2010 at 1:09 PM, Damien Broderick wrote: > On 1/15/2010 1:50 PM, John Grigg wrote: > > I'm playing with the idea (if things work out...) of relocating to NYC >> to take over my father's $325 two bedroom rent control apt. (two miles >> away from the site of the WTC). I am not in love with the idea of >> living in the Big Apple >> > > Are you insane? Hey, I'll take it off your hands... :) > > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Fri Jan 15 21:45:27 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 15 Jan 2010 16:45:27 -0500 Subject: [ExI] Overposting to List (RE: Meaningless Symbols) In-Reply-To: <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> Message-ID: <20100115164527.dfvm9n1k0gwo88wc@webmail.natasha.cc> It is John Clark who overposted, not you. > Oh, have you seen "The Singularity is Near" movie yet, or "The Transcendant > Man" documentary? What did you think of them?? No. I heard they weren't well done. > Oh, and did you like > Avater? Will be seeing Avatar very soon! Natasha From natasha at natasha.cc Fri Jan 15 21:48:28 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 15 Jan 2010 16:48:28 -0500 Subject: [ExI] NY NY it's a wonderful town In-Reply-To: <9ff585551001151344n7cc79ac4k4627a515e83228ab@mail.gmail.com> References: <331623.31673.qm@web36501.mail.mud.yahoo.com> <08CDF321-0409-4F75-AECD-AC37F1598EAB@bellsouth.net> <36AD8054-048C-4C12-A789-F3BF1C2B7088@bellsouth.net> <2d6187671001151150w376f782bia62dddeba2a45034@mail.gmail.com> <4B50CB71.5090907@satx.rr.com> <9ff585551001151344n7cc79ac4k4627a515e83228ab@mail.gmail.com> Message-ID: <20100115164828.9f38bvr1cgocs0wo@webmail.natasha.cc> I was born there too! I love NY! Quoting Michael LaTorra : > John, if you do this, sub-let [rent] the other bedroom. If you're in a good > neighborhood, it's worth far more than the rent you're paying. For example: > > 2 Bedrooms $1700 828 sq. ft.1 Bath(s) > http://new-york-city.apartmenthomeliving.com/apartments-for-rent/2-bedroom/from-600 > > New York is a great city. Great and terrible. 
Extremely expensive. Extremely > exciting. I was born there. > > Best of luck! > > Mike LaTorra > > On Fri, Jan 15, 2010 at 1:09 PM, Damien Broderick > wrote: > >> On 1/15/2010 1:50 PM, John Grigg wrote: >> >> I'm playing with the idea (if things work out...) of relocating to NYC >>> to take over my father's $325 two bedroom rent control apt. (two miles >>> away from the site of the WTC). I am not in love with the idea of >>> living in the Big Apple >>> >> >> Are you insane? Hey, I'll take it off your hands... :) >> >> Damien Broderick >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > From eric at m056832107.syzygy.com Fri Jan 15 22:16:55 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 15 Jan 2010 22:16:55 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B50C6B4.1010404@satx.rr.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <20100115193845.5.qmail@syzygy.com> <4B50C6B4.1010404@satx.rr.com> Message-ID: <20100115221655.5.qmail@syzygy.com> Damien writes: >On 1/15/2010 1:38 PM, Eric Messick wrote: > >> Sounds like you've got a problem with behaviorist descriptions. Can >> you explain? > >I don't have to. Chomsky did it in 1959 when he killed Skinner with a >single review. > >[reprinted http://www.chomsky.info/articles/1967----.htm ] I haven't seen the book Chomsky is reviewing, and I only skimmed the review, but I saw no reason to disagree with Chomsky. Chomsky points out that Skinner is trying to stretch the results of simple scientific experiments on rats and pigeons to cover linguistic activity of humans, and that Skinner looses specificity by doing this. Chomsky also gives the impression that Skinner is ignoring much of the internal state of a complex organism, again resulting in a vague lack of specificity: Chomsky: One would naturally expect that prediction of the behavior of a complex organism (or machine) would require, in addition to information about external stimulation, knowledge of the internal structure of the organism, the ways in which it processes input information and organizes its own behavior. I'm not sure how my discussion of a CPU adder understanding the difference between zeroness and oneness prompted your remark about behaviorism. I'd say I was a behaviorist only in the weak sense that our access to the internal state of a brain is only currently available through observing behavior. I can't imagine that Skinner would claim that the previous life history of a complex organism had no bearing on current complex behavior of that organism, but Chomsky seems to be implying that. In any case, I think that the internal state built up by life experience is crucial to complex behavior, even if simple behavior can be successfully molded by simple conditioning, as Skinner's experiments show. So: The meaning of a symbol in a brain is encoded in the interconnections of the neurons which activate when that symbol is active. We can probe the meaning of a symbol by observing the behavior of the processor. Processing elements in a CPU are connected such that certain symbols mean zero and one. We can probe that meaning by observing how the CPU adds numbers. Behavior at this level is simple enough that we're still in the realm where Chomsky wouldn't be criticizing Skinner about specificity. 
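To make the adder point concrete, here is a minimal sketch, assuming nothing more than ordinary Python and the textbook two-XOR/two-AND/one-OR decomposition of a one-bit full adder; the function names and the little-endian bit lists are illustrative choices, not a description of any real CPU.

    def full_adder(a, b, carry_in):
        # One bit position: the wiring below is the standard full-adder
        # decomposition into XOR, AND and OR gates.
        sum_bit = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return sum_bit, carry_out

    def add_bits(x_bits, y_bits):
        # Ripple-carry addition over equal-length little-endian bit lists.
        carry = 0
        result = []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        result.append(carry)
        return result

    assert add_bits([0], [0]) == [0, 0]   # 0 + 0 = 0
    assert add_bits([1], [1]) == [0, 1]   # 1 + 1 = 2 (binary 10)

The "zeroness" of 0 and the "oneness" of 1 show up only as the fixed way the gates respond to those inputs, which is exactly the thin sense of "understanding" at issue in this thread.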
-eric From scerir at libero.it Sat Jan 16 00:03:28 2010 From: scerir at libero.it (scerir) Date: Sat, 16 Jan 2010 01:03:28 +0100 (CET) Subject: [ExI] Meaningless Symbols. Message-ID: <30151309.987061263600208512.JavaMail.defaultUser@defaultHost> >> My computer does understand numbers, though. There is some hidden semantics, in my calculator too, and in any calculator I must say. Possibly the signature of some Great Programmer? Punch any three digits into your calculator. Then punch in the same three again. No matter which digits you choose, the resulting six-digit number will be exactly divisible by 13, that result divisible by 11, and the last result by 7. (And, of course, you will end up with the same three-digit number you started with.) From spike66 at att.net Fri Jan 15 23:57:39 2010 From: spike66 at att.net (spike) Date: Fri, 15 Jan 2010 15:57:39 -0800 Subject: [ExI] have you ever seen anything like this? Message-ID: <66893462A8AB44BA8E07C37D33BAC07D@spike> I have heard of humans who get caught up in the emotion of the battle, suicidal rage etc, but I don't think I have ever seen it in any other beast. Here a woodpecker keeps coming back to fight, clearly not for self defense or with any hope of actually devouring the serpent, but rather just to injure or slay it: http://www.youtube.com/watch?v=14yxYTOdL38 spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Jan 16 00:28:49 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 15 Jan 2010 16:28:49 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: <580930c21001150927o3db292c5n7c5428669adc81e1@mail.gmail.com> Message-ID: <417975.19906.qm@web36503.mail.mud.yahoo.com> Stefano, If you watch a small child solve a math problem on his fingers then you will watch a fellow human use a simple calculator to facilitate and extend his mathematical understanding. The understanding of the numbers takes place in his mind, not in his fingers. A few years later his parents will buy him a pocket calculator. If the child thinks clearly then he will understand how his new battery-powered gizmo works in the same way his fingers once did: as a tool for facilitating and extending his own mathematical understanding. If the child cannot think clearly then he may find himself believing that something inside his calculator has a mental life capable of understanding mathematics. Presumably that conscious entity lives in the microchip. It goes away when the batteries die, but comes back if the boy plugs in the AC adapter. That little mind inside the microchip doesn't have much personality, but boy it sure is a whiz at doing math. -gts From gts_2000 at yahoo.com Sat Jan 16 02:02:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 15 Jan 2010 18:02:21 -0800 (PST) Subject: [ExI] Meaningless Symbols. Message-ID: <171918.49385.qm@web36504.mail.mud.yahoo.com> --- On Fri, 1/15/10, John Clark wrote: >> print "Hello World" > > The title of this thread is "Meaningless Symbols", if "print" was one of > those to the computer then it would not do exactly the same thing each > time it encountered that symbol, instead it would do some > arbitrary thing. Apparently the computer ascribed meaning to > at least one of those "meaningless symbols". You've stumbled close to the truth there, John. As I pointed out in my original message, "print" counts as a syntactic rule. 
Although it stretches the definition of "understanding" I can for the sake of argument agree that s/h systems mechanically understand syntax. They cannot however get semantics from their so-called understanding of syntax. Nor can humans get semantics from syntax, for that matter, and humans really do understand syntax. I mentioned also that the classic one-line "Hello World" program does not differ in any important philosophical way from the most sophisticated possible program. Someone made some scornful and ignorant comment about that. Let us say that we have a sophisticated program that behaves in every way like a human such that it passes the Turing test. We then add to that program the line 'print "Hello World"' (or perhaps 'speak "Hello World"') such that the command will execute at an appropriate time still consistent with passing the Turing test. That advanced program will not understand the meaning of "Hello World" any more than does the one-line program running alone. S/H systems can do no more than follow syntactic rules for crunching words and symbols. They have no way to attach meanings to the symbols or to understand those meanings. Those semantic functions belong to the humans who program and operate them. -gts From stathisp at gmail.com Sat Jan 16 02:13:09 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 16 Jan 2010 13:13:09 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <213685.55239.qm@web36502.mail.mud.yahoo.com> References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/16 Gordon Swobe : > --- On Thu, 1/14/10, Stathis Papaioannou wrote: > >> Presumably for the brain you don't believe the code or the >> algorithm implemented by neural networks firing gives rise >> to understanding, > > If the brain uses code or implements algorithms at all (and it probably does not) then it must do something else besides. The computationalist theory of mind simply fails to explain the facts. The primary visual cortex (V1) is isomorphic with the visual field. Certain neurons fire when you see a vertical line and different neurons fire when you see a horizontal line. The basic pattern information is passed on to deeper layers of the cortex, V2-V5, where it is processed further and gives rise to perception of more complex visual phenomena, such as perception of foreground moving on background and face recognition. The individual neurons follow relatively simple rules determining when they fire, but the network of neurons behaves in a complex way due to the complex interconnections. The purpose of the internal machinery of the neuron is to ensure that it behaves appropriately in response to input from other neurons. The important point here is that the neuron follows an algorithm which has no hint in it of visual perception. If you replaced parts of the neuron with artificial components that left the algorithm unchanged, the neuron would function normally and the subject's perception would be normal. It wouldn't matter what the artificial components were made of as long as the neuron behaved normally; and as discussed this is true with the strength of logical necessity, unless you are willing to entertain what I consider an incoherent notion of consciousness. Moreover, it is the pattern of interconnected neurons firing that is necessary and sufficient for the person's behaviour, so if consciousness is something over and above this it would seem to be completely superfluous.
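As a minimal sketch of the "simple rule, complex network" picture described above, here is a toy leaky integrate-and-fire update; the threshold, leak factor and random weights are made-up illustrative values, not claims about real cortical parameters.

    import random

    def step(potentials, spikes, weights, threshold=1.0, leak=0.9):
        # One update of a toy leaky integrate-and-fire network: each
        # neuron leaks a little charge, adds weighted input from the
        # neurons that spiked on the previous step, and fires
        # (resetting to zero) if it crosses the threshold.
        new_potentials, new_spikes = [], []
        for i, v in enumerate(potentials):
            v = leak * v + sum(weights[j][i] for j, fired in enumerate(spikes) if fired)
            fired = v >= threshold
            new_spikes.append(fired)
            new_potentials.append(0.0 if fired else v)
        return new_potentials, new_spikes

    # A small random network: the rule above says nothing about lines,
    # edges or faces; any such selectivity would live in the weights.
    n = 5
    weights = [[random.uniform(0.0, 0.6) for _ in range(n)] for _ in range(n)]
    potentials = [random.uniform(0.0, 1.0) for _ in range(n)]
    spikes = [False] * n
    for _ in range(10):
        potentials, spikes = step(potentials, spikes, weights)

Nothing in the update rule mentions visual perception; whatever the network does follows from the pattern of connections, so reproducing that pattern on a different substrate reproduces the behaviour.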
But if you still think that the consciousness of the brain resides in the actual matter of the neurons rather than their function, then you could consistently maintain that it resides in the matter of the modified neurons provided that they still functioned normally. -- Stathis Papaioannou From stathisp at gmail.com Sat Jan 16 02:42:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 16 Jan 2010 13:42:27 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <213685.55239.qm@web36502.mail.mud.yahoo.com> References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/16 Gordon Swobe : > I think that eventually neuroscience and the philosophy of mind will merge into one field -- that neuroscientists will come to see that they hold in their hands the answers to these questions of philosophy. When neuroscientists make a working model of a brain and claim that, since it behaves like a real brain it must also have the mind of a real brain, there will be the doubters. The neuroscientists will stamp their feet and point to their experimental results but the doubters will still doubt, as there is no possible empirical fact that will convince them. Therefore, it will by definition always remain a philosophical question. > It has already started if you look with open eyes: neuroscientists have produced antidepressant drugs that brighten mood, a quality of consciousness, and drugs that alleviate pain, another quality of consciousness, and so on and so on. The drugs can do this only by affecting the behaviour of neurons. What you claim is that it is possible to make a physical change to a neuron which leaves its behaviour unchanged but changes or eliminates the person's consciousness. -- Stathis Papaioannou From lacertilian at gmail.com Sat Jan 16 03:08:47 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 15 Jan 2010 19:08:47 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: Regarding antidepressants and other mind-altering drugs: I'd like to add that simply because we have a drug which *produces* happiness, this does not necessarily mean anyone actually understands why that is yet. Gordon and Stathis seem to be implying otherwise. A quick Google search for "history of antidepressants" turns up the following: http://web.grinnell.edu/courses/sst/f01/SST395-01/PublicPages/PerfectDrugs/Chris/history/index2.html Four sentences in, and sure enough the telltale term "accidental discovery" appears. No one knows what happens beyond the blood-brain barrier. There aren't even that many convincing *theories, *as far as I can tell, and even fewer subject to scientific testing. Incidentally, this is my first message to Extropy-Chat. I probably screwed up the format somehow. If any hardened Gmail users want to point out my inevitable mistakes, I'd be much obliged. On Fri, Jan 15, 2010 at 6:42 PM, Stathis Papaioannou wrote: > 2010/1/16 Gordon Swobe : > > > I think that eventually neuroscience and the philosophy of mind will > merge into one field -- that neuroscientists will come to see that they hold > in their hands the answers to these questions of philosophy. > > When neuroscientists make a working model of a brain and claim that, > since it behaves like a real brain it must also have the mind of a > real brain, there will be the doubters. 
The neuroscientists will stamp > their feet and point to their experimental results but the doubters > will still doubt, as there is no possible empirical fact that will > convince them. Therefore, it will by definition always remain a > philosophical question. > > > It has already started if you look with open eyes: neuroscientists have > produced antidepressant drugs that brighten mood, a quality of > consciousness, and drugs that alleviate pain, another quality of > consciousness, and so on and so on. > > The drugs can do this only by affecting the behaviour of neurons. What > you claim is that it is possible to make a physical change to a neuron > which leaves its behaviour unchanged but changes or eliminates the > person's consciousness. > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Jan 16 06:13:02 2010 From: spike66 at att.net (spike) Date: Fri, 15 Jan 2010 22:13:02 -0800 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: <8F502A3F7FAE466AA1BE89EDD3419908@spike> On Behalf Of Spencer Campbell ... If any hardened Gmail users want to point out my inevitable mistakes, I'd be much obliged... Spencer Hardened Gmail users. {8^D Haaa I like that. Welcome Spencer! {8-] spike From pharos at gmail.com Sat Jan 16 10:24:08 2010 From: pharos at gmail.com (BillK) Date: Sat, 16 Jan 2010 10:24:08 +0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: On 1/16/10, Spencer Campbell wrote: > Incidentally, this is my first message to Extropy-Chat. I probably screwed > up the format somehow. If any hardened Gmail users want to point out my > inevitable mistakes, I'd be much obliged. > Hi Your mail was fine for other Gmail users. But, (there's always a but) :) the Mailman list archives and other mail systems prefer messages in Plain text (i.e.not HTML). It is recommended to trim the message you are replying to down to only the portion that you are commenting on and place your reply after those sentences. (This avoids messages that grow like Topsy, ever-increasing, until megabytes of unread data are being sent). Best wishes, BillK From stathisp at gmail.com Sat Jan 16 12:36:34 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 16 Jan 2010 23:36:34 +1100 Subject: [ExI] Meaningless Symbols. In-Reply-To: References: <213685.55239.qm@web36502.mail.mud.yahoo.com> Message-ID: 2010/1/16 Spencer Campbell : > Regarding antidepressants and other mind-altering drugs: I'd like to add > that simply because we have a drug which produces happiness, this does not > necessarily mean anyone actually understands why that is yet. Gordon and > Stathis seem to be implying otherwise. > A quick Google search for "history of antidepressants" turns up the > following: > http://web.grinnell.edu/courses/sst/f01/SST395-01/PublicPages/PerfectDrugs/Chris/history/index2.html > Four sentences in, and sure enough the telltale term "accidental discovery" > appears. No one knows what happens beyond the blood-brain barrier. There > aren't even that many convincing theories, as far as I can tell,?and even > fewer subject to scientific testing. 
We do know what most antidepressant drugs do, insofar as we know what receptors they affect and how exactly they affect them. What we don't know is why this should have an effect on mood. It's a similar story with other psychoactive drugs. The other point to make about antidepressants is that they don't work very well: they work in about 30% of patients who have clinical depression, and not at all in people who are simply unhappy. Clinical trials show a 60% efficacy, but the placebo has at least 30% efficacy in these trials. -- Stathis Papaioannou From nebathenemi at yahoo.co.uk Sat Jan 16 12:36:38 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sat, 16 Jan 2010 12:36:38 +0000 (GMT) Subject: [ExI] University degrees (in response to Emlyn) In-Reply-To: Message-ID: <889094.11821.qm@web27003.mail.ukl.yahoo.com> Sorry for not replying earlier in the week, I've had a tough time in a dead-end job (which will become relevant later). On tuesday, Emlyn wrote a long post about Jaron Lanier's new book, and Emlyn talked about modern technological upheavals. I was particularly interested by the following paragraph: "(Next on the chopping block: Universities, whose cash cow, the undergrad degree, will be replaced with cheap/free alternative, and scientific journals, which are much better suited by free online resources if the users can just escape the reputation network effect of the existing closed journals)" If only. Back in the time of Socrates, the finest Sophists could command 100 minas of gold for a course of education designed to broaden the mind of the city's finest young men. 60 minas = 1 talent, or 100 librum (pounds) of gold to those thinking in roman units, so 100 minas is 166 pounds of gold - maybe a million dollars in the current gold bubble, certainly a few hundred thousand for most of the past decade - makes an Ivy League education look cheap. Socrates taught his philosophy while drinking, and the price of admission was being able to keep up with the master's legendary wine consumption. Socrates was disdainful of charging to teach philosophy, but others charged what the market would bear. In our modern age, where colossal numbers of books are available and huge amounts of information online, can we educate ourselves easily? Or do we once again find that we only value what is paid for? The undergraduate degree itself may serve several purposes, including improving one's mental abilities, improving your career prospects, and preparing you for specific tasks such as research in a particular field. The first is hard to put a price on, but the second is a real stumbling block. Many jobs in well-paid or interesting fields now ask for degrees, preferably in a specific field. If we can provide a low-cost online alternative to the university-based degrees, will employers still value it the same or will prejudice place your new education at a lower level than the old-fashioned one? We already have distance-learning degrees, and some employers take them just fine and others are prejudiced. I'm wondering how well any new attempts to reshape higher education will work. Emlyn's post also talked about job losses due to technology, saying "The only real threatened jobs are where people are doing low value crap. Padding. High value stuff will remain." Well, my well-paid job doing reasonably high-value stuff in insurance disappeared with the start of our current economic problems back in 2007. 
I blew my savings while unemployed, and have done dead-end jobs (office temping, call-centre work) since because it beats welfare. I have a deep need to retrain and find a new career, but my old degree doesn't count for much. To get into many jobs paying more than the dead-end ones, I would need qualifications in a specific area. These qualifications cost money, but I can't afford to pay for more qualifications without getting into debt - unfortunately in a dead-end job it's hard to get a loan at a less-than-punishing rate right now. So, it seems our current system for keeping people fed and housed (or paid in some manner) and trying to harness their talents into work that keeps the country going (or "meaningful employment") is flawed. Also, the system for educating minds to do theoretical work isn't great - for theory work, you needs minds that understand the field, access to the information of what has already been done, and plenty of time. We could employ plenty of intelligent but otherwise underemployed people this way, but no-one's found a cheap enough way of imparting educations and offering access to journals. I could go on longer about this, but I need to get back to finding a less stressful source of above-welfare-level income. Tom From bbenzai at yahoo.com Sat Jan 16 13:03:02 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 16 Jan 2010 05:03:02 -0800 (PST) Subject: [ExI] Meaningless Symbols In-Reply-To: Message-ID: <724270.2104.qm@web113618.mail.gq1.yahoo.com> Gordon Swobe > I mentioned also that the classic one-line "Hello World" > program does not differ in any important philosophical way > from the most sophisticated possible program. Someone made > some scornful and ignorant comment about that. 'Someone' said that it was one of the most ridiculous things they had ever heard. Scornful? certainly, and justifiably so. Ignorant? Only if you consider it ignorant to point out the ridiculousness of claiming that a hydrogen atom doesn't differ in any important way from a solar system, or that the operation of a spinal reflex doesn't differ in any important way from the functioning of a human brain. As has already been pointed out, putting enough simple things together often (and perhaps inevitably) results in a completely different, complex thing, with completely different properties, which as far as we can tell, are not predictable from the properties of the simple things. To claim that the complex thing does not differ in any important way from the simple thing is, I'll say it again, totally ridiculous. Ben Zaiboc From pharos at gmail.com Sat Jan 16 14:56:50 2010 From: pharos at gmail.com (BillK) Date: Sat, 16 Jan 2010 14:56:50 +0000 Subject: [ExI] University degrees (in response to Emlyn) In-Reply-To: <889094.11821.qm@web27003.mail.ukl.yahoo.com> References: <889094.11821.qm@web27003.mail.ukl.yahoo.com> Message-ID: On 1/16/10, Tom Nowell wrote: > So, it seems our current system for keeping people fed and housed > (or paid in some manner) and trying to harness their talents into work > that keeps the country going (or "meaningful employment") is flawed. > Also, the system for educating minds to do theoretical work isn't great > - for theory work, you needs minds that understand the field, access to > the information of what has already been done, and plenty of time. > We could employ plenty of intelligent but otherwise underemployed > people this way, but no-one's found a cheap enough way of imparting > educations and offering access to journals. 
> >  You find yourself at the sharp end of the looting of the US capitalist system. Companies are stripped bare, then closed down or moved to China. Wealth and income are concentrated into a smaller and smaller percentage of the people. These super-wealthy few have now used their wealth to take over Congress and the Fed and are now looting the US treasury. Unemployment and food stamps for all is a minor side-effect. Obama was supposed to change all this, but he faces a bought Congress controlled by lobbyists. What are the US people to do? Voting the Republicans back in won't change anything. BillK From jonkc at bellsouth.net Sat Jan 16 17:06:52 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 16 Jan 2010 12:06:52 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <4B50B764.8010003@satx.rr.com> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> Message-ID: <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net> On Jan 15, 2010, Damien Broderick wrote: > I just poured 3 cups of water into a 2 cup jar. Does the fact that it stopped accepting water after I'd put in 2 cups and overflowed the rest mean it *understands* 3>2? Yes. > Then I put a 1 foot rule next to a book and the 9 matched up with the top of the book. Did the rule *understand* how tall the book is? Yes. > Computer programs understand nothing more than that. So what? That's enough understanding to work with; embarrassingly too much actually. Gordon thinks that genuine understanding is a completely useless property for intelligent people or machines to have because they would continue to act in exactly the same way whether they have understanding or not. Apparently you believe the same thing; nevertheless for reasons never explained Evolution invented understanding long ago and even more bizarrely saw fit to retain it over hundreds of millions of years. Or at least that's what Gordon claims to have happened because on at least some occasions he says he understands things; you may have made similar assertions in the past. > This all reminds me of the behaviorist idiocy of the 1950s. Given the above you may not be in the strongest position to call anything idiocy. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sat Jan 16 17:23:17 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 16 Jan 2010 18:23:17 +0100 Subject: [ExI] Meaningless Symbols. In-Reply-To: <417975.19906.qm@web36503.mail.mud.yahoo.com> References: <580930c21001150927o3db292c5n7c5428669adc81e1@mail.gmail.com> <417975.19906.qm@web36503.mail.mud.yahoo.com> Message-ID: <580930c21001160923t448b791ch36874c715407842a@mail.gmail.com> 2010/1/16 Gordon Swobe : > If you watch a small child solve a math problem on his fingers then you will watch a fellow human use a simple calculator to facilitate and extend his mathematical understanding. The understanding of the numbers takes place in his mind, not in his fingers. Yes. And association of symbols such as x=3, y=5, then x+y=8 takes place in the CPU, not in the CD-ROM drive. So?
-- Stefano Vaj From stefano.vaj at gmail.com Sat Jan 16 17:27:28 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 16 Jan 2010 18:27:28 +0100 Subject: [ExI] Meaningless Symbols In-Reply-To: <724270.2104.qm@web113618.mail.gq1.yahoo.com> References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> Message-ID: <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com> 2010/1/16 Ben Zaiboc : > To claim that the complex thing does not differ in any important way from the simple thing is, I'll say it again, totally ridiculous. Yes and no. It appears that there is one single qualitative threshold (and a pretty low one...) as far as "complexity" is concerned. Beyond that, all conceivable degrees of complexity can be generated by systems having attained the required level, and all that changes is the performances in the completion of a given computation. See again a New Kind of Science. -- Stefano Vaj From gts_2000 at yahoo.com Sat Jan 16 17:41:32 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 16 Jan 2010 09:41:32 -0800 (PST) Subject: [ExI] Meaningless Symbols. In-Reply-To: Message-ID: <554821.71139.qm@web36502.mail.mud.yahoo.com> --- On Fri, 1/15/10, Stathis Papaioannou wrote: > The drugs can do this only by affecting the behaviour of > neurons. What you claim is that it is possible to make a physical > change to a neuron which leaves its behaviour unchanged but changes or > eliminates the person's consciousness. You keep assigning absolute atomic status to neurons and their behaviors, forgetting that just as the brain is made of neurons, neurons are made of objects too. Those intra-neuronal objects have as much right to claim atomic status as does the neuron, and larger inter-neuronal structures can also make that claim. And on top of that you assume that digital simulations of whatever structures you arbitrarily designate as atomic will in fact work exactly like the supposed atomic structures you hope to simulate -- which presupposes that the brain in actual fact exists as a digital computer. -gts From jonkc at bellsouth.net Sat Jan 16 17:53:50 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 16 Jan 2010 12:53:50 -0500 Subject: [ExI] Meaningless Symbols. In-Reply-To: <171918.49385.qm@web36504.mail.mud.yahoo.com> References: <171918.49385.qm@web36504.mail.mud.yahoo.com> Message-ID: On Jan 15, 2010, at 9:02 PM, Gordon Swobe wrote: > > As I pointed out in my original message, "print" counts as a syntactic rule. Putting things in inflexible little boxes called "syntax" and "semantics" is an entirely human invention, nature doesn't make any such rigid distinctions. In the DNA code CAU means put the amino acid Histidine right here, and I don't care if thats syntax or semantics it created you. > Although it stretches the definition of "understanding" I can for the sake of argument agree that s/h systems mechanically understand syntax. They cannot however get semantics from their so-called understanding of syntax. You have been saying that for over a month now, you have found many new ways to express the same statement but you have yet to give us one reason to think it is true, and you have ignored the many reasons offered to think it is not true. > Let us say that we have a sophisticated program that behaves in every way like a human such that it passes the Turing test. We then add to that program the line 'print "Hello World"' (or perhaps 'speak "Hello World"') such that the command will execute at an appropriate time still consistent with passing the Turing test. 
That advanced program will not understand the meaning of "Hello World" any more than does the one line program running alone. You have been saying that for over a month now, you have found many new ways to express the same statement but you have yet to give us one reason to think it is true, and you have ignored the many reasons offered to think it is not true. > Nor can humans get semantics from syntax, for that matter, and humans really do understand syntax. What a remarkably silly thing to say! If that were true why would people read books? Why would they even talk to each other? > S/H systems can do more than follow syntactic rules for crunching words and symbols. They have no way to attach meanings to the symbols or to understand those meanings. Those semantic functions belong to the humans who program and operate them. I wish you'd stop dancing around and just say what you believe, humans have something that is not information (software) or matter (hardware), humans have a soul. I don't believe in the soul. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at m056832107.syzygy.com Sat Jan 16 18:03:32 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 16 Jan 2010 18:03:32 -0000 Subject: [ExI] Meaningless Symbols. In-Reply-To: <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net> References: <580930c21001150634h42b32fdej404418b25bd6e54a@mail.gmail.com> <129394.58494.qm@web36507.mail.mud.yahoo.com> <20100115182313.5.qmail@syzygy.com> <4B50B764.8010003@satx.rr.com> <23995788-1242-41EC-8E62-F89C4717C6EB@bellsouth.net> Message-ID: <20100116180332.5.qmail@syzygy.com> John Clark writes: > Gordon thinks that genuine understanding is a completely >useless property for intelligent people or machines to have because they >would continue to act in exactly the same way whether they have >understanding or not. I'm not at all sure that's what Gordon thinks, although it is difficult to tell for sure. A claim he's made several times, but which seems to have mostly slipped by unnoticed, is that a program controlled neuron cannot be made to behave the same as a biological one. In discussing the partial replacement thought experiment he says that the surgeon will replace the initial set of neurons and find that they don't produce the desired behavior in the patient, so he has to go back and tweak things again. Everyone else seems to think Gordon means that tweaking is in the programming, and that eventually the surgeon manages to get the program right. He's actually said that the surgeon will need to go in and replace more and more of the patient's brain in order to get the patient to pass the Turing test, and that the extensive removal of biological neurons is what turns the patient into a zombie. Since Gordon also claims that neurons are computable, this seems to me to be a contradiction in his position. My guess at his response to this would be: Sure, neurons may be computable, but we don't CURRENTLY know enough about how they work to duplicate their behavior well enough to support consciousness. My reply to that would be: by the time we can make replacement neurons we will be very likely to know how they work in sufficient detail. In fact, we currently know a good deal about how they work. What we're missing is the wiring pattern. I'm going to also guess that Gordon thinks the thing we don't currently know how to do in making a programmatic neuron is to derive semantics from syntax. 
I think I remember him saying he believes this to eventually be possible, but that we currently have no clue how. So, Gordon seems to think that consciousness is apparent in behavior, and thus selectable by evolution. I think that's why he's not interested in your line of argument. Gordon: did I represent your position accurately here? -eric From aware at awareresearch.com Sat Jan 16 18:03:56 2010 From: aware at awareresearch.com (Aware) Date: Sat, 16 Jan 2010 10:03:56 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com> References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com> Message-ID: On Sat, Jan 16, 2010 at 9:27 AM, Stefano Vaj wrote: > 2010/1/16 Ben Zaiboc : >> To claim that the complex thing does not differ in any important way from the simple thing is, I'll say it again, totally ridiculous. > > Yes and no. It appears that there is one single qualitative threshold > (and a pretty low one...) as far as "complexity" is concerned. Beyond > that, all conceivable degrees of complexity can be generated by > systems having attained the required level, and all that changes is > the performances in the completion of a given computation. See again a > New Kind of Science. Stefano, you are correct and Wolfram makes an important point about "computational irreducibility" in regard to our ability to predict the behavior of complex systems. Beyond a certain point, the only way to know is to run the system and observe the outcomes. But this does not imply that novel qualitative differences do not continue to emerge as the (4th Law of Thermodynamics?) result of stochastic discovery of synergies exploiting new degrees of freedom. More is indeed different. And all of this has virtually zero bearing on the exceedingly simple but excruciatingly nonintuitive *epistemological* puzzle of [meaning|semantics|consciousness|qualia|experience|intentionality]. - Jef From aware at awareresearch.com Sat Jan 16 18:37:32 2010 From: aware at awareresearch.com (Aware) Date: Sat, 16 Jan 2010 10:37:32 -0800 Subject: [ExI] Meaningless Symbols In-Reply-To: References: <724270.2104.qm@web113618.mail.gq1.yahoo.com> <580930c21001160927t131d2c6ck5a5405621c3516db@mail.gmail.com> Message-ID: This discussion shares much in common with PHIL101-type bantering common in college dorms--less the wine, beer and marijuana. If you want to gain some traction, might I suggest the following? I've given it to others with useful effect and I think I've posted it here before. If the purpose of this discussion is to increase understand