From johnkclark at gmail.com Sun Feb 1 00:51:57 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 31 Jan 2015 19:51:57 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CD5714.4070307@canonizer.com> References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> Message-ID: On Sat, Jan 31, 2015 Brent Allsop wrote: > > Sure, if your knowledge has a redness quality, you can think of this > redness quality as a label, a label that all of our knowledge of things > that reflect 650 nm light has. You are talking about what this labeled > knowledge represents. I am talking about the qualitative nature of the > label itself, > I think the nature of the label itself, that is to say the essential qualia, comes from all the things in your memory that reflect 650 nm light and, more importantly, from the astronomically large number of things and ideas that, although lacking the REDNESS qualia themselves, nevertheless are associated with it. That, I think, is where qualia come from. Let me put it another way: I don't think there would be any difference, subjectively or objectively, between somebody who saw everything in black and white and somebody who saw everything in black and red. > For the rest of your life, you wear red-green color-inverting glasses. > When you first do this, it is difficult, because you know your knowledge of > things that reflect 650 nm light are represented with knowledge that has or > is tagged with a redness quality, and things that represent things that > reflect 700 nm light are represented with greenness. And now this is > backwards, making it difficult at first. > OK. 
> > Eventually after a long period of time you will learn to associate and > bind the redness quality, with all the things before that were green, and > vice versa > Things would certainly look weird when I first put the glasses on because the new qualia associations would be inconsistent with my memory of the old ones, but as you say, after a long time things would start to look normal again. But why would it take a long time? Because of the huge number of nested links and associations that would have to be reassigned for things to make sense again. If I then took the glasses off, things would look weird again and would stay that way until my brain managed to put all those links back to where they were originally. > > It will eventually become very natural for you to be just as normal as > it was, before, > I agree. > > but you will know that your knowledge is very qualitatively different > than before you put on those glasses. > That is exactly the point where you and I differ on this. If all the links have been successfully reassigned, then I don't see how in the world things would seem "very qualitatively different than before". But at the end of the day the really important question isn't the nature of REDNESS or GREENNESS; it's the question I asked in my last post that you didn't answer: do you believe, as I do, that consciousness is fundamental? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hrivera at alumni.virginia.edu Sun Feb 1 02:07:08 2015 From: hrivera at alumni.virginia.edu (Henry Rivera) Date: Sat, 31 Jan 2015 21:07:08 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> Message-ID: > On Jan 31, 2015, at 7:51 PM, John Clark wrote: > > I think the nature of the label itself, that is to say the essential qualia, comes from all the things in your memory I'm going a bit off topic here from Brent's original question... I think it's clear that John doesn't understand, accept, and/or believe in the concept of qualia, specifically that it is something that could exist a priori to memory and humans, for that matter. This is an accepted concept in philosophy of mind, and such acceptance is pretty much required for any productive discourse. Reevaluating the premise that qualia exist at all is fair game, I guess, but Brent is way past that and doesn't have the patience to do that, as you can see. I support Orch OR, which postulates that qualia are part of spacetime itself, embedded at the Planck scale. This avoids metaphysical solutions to the problem. This page from a book hosted on Google addresses these concepts succinctly: https://books.google.com/books?id=dd45AAAAQBAJ&pg=PA94&lpg=PA1&focus=viewport&output=html_text -Henry From gsantostasi at gmail.com Sun Feb 1 02:26:27 2015 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sat, 31 Jan 2015 20:26:27 -0600 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> Message-ID: Well, qualia may be accepted as something that can exist a priori to memory and humans in philosophy, but not in any real science. 
There are a lot of brilliant minds that dismiss the concept of qualia altogether. And Orch OR is superstition. Qualia as a neuroscience phenomenon can be explained easily if one thinks about bottom-up processes vs. top-down ones. It is really all about this difference between processes like imagination, which is a top-down process, and perception, which is a bottom-up one. The immediacy of qualia is due to the fact that they have to do with perception, while talking about, discussing, and describing them has to do with top-down processes. This is why they seem so difficult to communicate and ineffable, but there is nothing ineffable about them at all. And I agree with Brent that they are detectable if they are a real phenomenon; something that is not detectable simply doesn't exist in the physical world (the only world that exists). Giovanni On Sat, Jan 31, 2015 at 8:07 PM, Henry Rivera wrote: > > On Jan 31, 2015, at 7:51 PM, John Clark wrote: > > > > I think nature of the label itself, that is to say the essential qualia, > comes from all the things in your memory > > I'm going a bit off topic here from Brent's original question... > > I think it's clear that John doesn't understand, accept, and/or believe in > the concept of qualia, specifically that it is something that could exist a > priori to memory and humans, for that matter. This is an accepted concept > in philosophy of mind, and such acceptance is pretty much required for any > productive discourse. Reevaluating the premise that qualia exist at all is > fair-game I guess, but Brent is way past that and doesn't have the patience > to do that, you can see. > > I support Orch OR which postulates that qualia are part of space time > itself, embedded at the Planck scale. This avoids metaphysical solutions to > the problem. 
> > This page from a book hosted on Google addresses these concepts succinctly: > > https://books.google.com/books?id=dd45AAAAQBAJ&pg=PA94&lpg=PA1&focus=viewport&output=html_text > > -Henry > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sun Feb 1 05:07:37 2015 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 31 Jan 2015 21:07:37 -0800 Subject: [ExI] taxonomy for fermi paradox fans: Message-ID: On Sat, Jan 31, 2015 at 4:00 AM, "Flexman, Connor" wrote: snip > Just because our subjective time speeds up doesn't seem to imply a lack of > desire to optimize the cosmos for utils. I am not sure from what you write if you have your head around the subject. Consider it from the viewpoint of a person who is alive today and lives to a singularity event or is revived from cryonic suspension into a fast simulation. It looks possible to do a million-to-one speedup, so as a first-pass guess, assume that. What has happened from their viewpoint is that all the distances have increased by a million times. Even the speed of light is slow: "A million-to-one speed up would impose a subjective round-trip delay of three days from one side of the earth to the other. Subjective round trip delay to the moon would be two months." http://web.archive.org/web/20121130232045/http://hplusmagazine.com/2012/04/12/transhumanism-and-the-human-expansion-into-space-a-conflict-with-physics/ If the population moves into a fast simulated environment, the subjective time to get to the stars becomes even more ridiculous than it is now. It's a local version of inflation. A single calendar year becomes a million years subjective. A million years isn't a lot in geological time, but civilization is less than 10,000 years old, so this is 100 times that span. 
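The latency figures Keith quotes scale linearly with the speed-up factor, so they are easy to check. A minimal sketch (the baseline round-trip times of 0.25 s for a routed antipodal Earth link and 5 s for an Earth-Moon link with relay and processing overhead are my assumptions, not figures taken from the linked article):

```python
# Subjective communication delay inside a simulation running a million
# times faster than real time. Baseline round-trip times are assumed:
#   ~0.25 s for a routed network path between Earth antipodes
#   ~5 s for an Earth-Moon link (~2.6 s of light time plus overhead)
SPEEDUP = 1_000_000
SECONDS_PER_DAY = 86_400

def subjective_days(round_trip_s: float, speedup: int = SPEEDUP) -> float:
    """Wall-clock round trip as experienced inside the sped-up simulation."""
    return round_trip_s * speedup / SECONDS_PER_DAY

print(f"Earth antipodes: {subjective_days(0.25):.1f} subjective days")
print(f"Moon:            {subjective_days(5.0):.1f} subjective days")
# Earth antipodes: 2.9 subjective days  (Keith's "three days")
# Moon:            57.9 subjective days (roughly "two months")
```

A different speed-up factor simply rescales both numbers proportionally.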
I once explained this to someone who was nothing short of horrified. (On the other hand, he had a cell phone.) I told him that he could have the job of watching the blinken blinken lights and, if they quit blinking, he was to push the reset button and restart uploaded civilization from the last checkpoint. I am prompted to think about this as a non-fatal reason we don't see any aliens or their works. Keith It seems many of us would gladly > undertake the goal of sending colonizing expeditions to other galaxies even > if it took far past our lifetimes for them to arrive (provided all the > normal caveats of our ability to ensure the meaningfulness of the > colonizers' existence if they weren't humans, convergence of their values > with our own, etc.). I don't see why a sped-up civilization wouldn't do the > same. Subjective time might be sped up, but they can still attempt to > optimize the future. If they're undertaking speed-up at nanoscales, it's > also likely they have enough control that their lifetimes are vastly > extended in subjective time, if not longer than 100 years of our time. > Colonizing stars in our galaxy could be done many times in a lifetime. > Connor > -- > Non est salvatori salvator, > neque defensori dominus, > nec pater nec mater, > nihil supernum. > From stathisp at gmail.com Sun Feb 1 12:52:12 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 1 Feb 2015 23:52:12 +1100 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CD4BF4.9050409@canonizer.com> References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> Message-ID: On 1 February 2015 at 08:41, Brent Allsop > wrote: > > Hi Stathis, > > It's great to hear from you. And the target audience of this paper is > intelligent people like you, so I really need help understanding how best to > communicate to people with this POV. So, thank you for reading, and for > jumping in here. 
> > You are using untestable, ill-defined metaphysical terms when you talk > about "consciousness" like this: "this will never be able to tell you if > the subject being studied really is conscious". > > When you use the term "conscious" you are talking about composite qualia, or > all of what consciousness is, as something that is not easily completely > sharable in its entirety. And you are providing no way to falsify any such > assertions. All I hear you saying is that consciousness is not > approachable via science. Aspects of consciousness, or if you prefer of qualia, can certainly be investigated scientifically, and a large part of neuroscience and psychology is devoted to doing just this. However, it is impossible to investigate scientifically whether someone actually has qualia and what those qualia are like. > What I am trying to say is that you can break composite "consciousness" and > composite qualia down to elemental qualities, like redness and greenness. > And that there is some kind of binding mechanism that binds them together, > so that you can be aware of redness and greenness at the same time, and > know how qualitatively different they are. Like when a painter makes a > composite painting using elemental color qualities, I am saying that you can > break consciousness down to effable, detectable, elemental qualities. I think you can break down consciousness in this way, but I don't think you can directly detect qualia. 
If you tried > > connecting it to your own brain and saw nothing, how would you know if that > > was because the sensor lacked qualia or because it doesn't interface > > properly with your brain? > > > You are thinking about this at the wrong level. CMOS systems can only do > intelligent operations if they have hardware that is interpreting that which > does not have consistent ones and zeros as if it did. And it certainly > doesn't have anything like an elemental redness quality at that abstractly > operating level, interpreted from its diverse intrinsic physical qualities. > > But there is the possibility that some stuff, like CMOS, does have an > intrinsic qualitative nature that can be bound up with other qualities the > way our brain binds things up with redness and greenness. And interpreting > the way CMOS acts as only colorless ones and zeros is being blind to the > qualitative nature that it could have. Zombie information can represent > everything about the qualitative nature of CMOS, but you can only know what > the qualitative nature of the same is if you interpret, correctly, > what you are detecting, not some interpreted pieces of zombie information we > think of it as having. If you claim to be able to detect qualia, then what test do you propose to use to decide whether CMOS sensors have "an intrinsic qualitative nature" or not? [SP] > The other point I would like to make is that (it seems to me) you have > misunderstood the neural substitution thought experiment you describe near > the end of the paper. Suppose glutamate is responsible for redness qualia, > and you replace the glutamate with an analogue that functions just like > glutamate in every observable way, except it lacks the qualia. 
How would you know that your own red qualia were not eliminated last > night while you slept by installing such a mechanism? > > > No, you know I fully understand this argument. Chalmers points out multiple > possible ways science could demonstrate what happens, subjectively, when you > do this neural substitution. You only consider the view that it will be > possible to do it just as described, so there is a conundrum. So if your > interpretation leads to such contradictions, then you are going down the > wrong path. Why do you refuse to consider any other possibility? > > Chalmers points out there are vanishing qualia and fading qualia options > you are not considering. I don't like the way he describes these, because > they are very metaphysical and non-testable predictions about what is > happening. So, if you assume a 3-color world, like that described in the > paper, the theory makes testable predictions about what the qualitative-consciousness > scientists will discover when they do the neuro substitution > experiment. Nothing they present to the binding system will ever have a > redness quality, except that which really has redness, so it will be a kind > of vanishing qualia. It's not problematic imagining that the qualia would vanish if the substitution were made with parts lacking the redness quality. What is problematic - and the entire point of the experiment - is that the qualia would vanish **without either the subject or the experimenters noticing that anything had changed**. > The critical part of the neuro substitution experiment is adding in the > hardware interpreters for every piece of hardware replacing the knowledge > being represented with qualitative properties. Sure, you know how to > interpret what the zombie knowledge represents; it can be thought of as > behaving the same way. 
And, once you replace the binding mechanism, and all > that does have true qualitative nature, it will be possible to think of it > as being the same thing. But, by definition, the zombie information will > not have redness, it can only be interpreted as and thought of, as if it > does. > > And sure, this is a very simplistic theory. But the prediction is, that > this is just an example of how to cross the qualitative knowledge boundary > in one possible world. And the prediction is, that a simple variation on > this theory will make it possible to bridge this knowledge gap in the real > world. > > Does any of that help? > > Brent Allsop > > > > > On 1/31/2015 7:37 AM, Stathis Papaioannou wrote: > > > > On Saturday, 31 January 2015, Brent Allsop > > wrote: >> >> >> On 1/30/2015 7:43 AM, William Flynn Wallace wrote: >> >> One side of this debate says that subjective experiences are metaphysical. >> So I have two comments: >> >> 1 - How does one go about proving the existence of something metaphysical? >> By proving that physical causes don't exist for that experience? Isn't >> that trying to prove a negative? >> >> 2 - Since nothing has ever been shown to be metaphysical (no way to >> measure it), why would one ever start from that as an assumption? Why, in >> fact, believe in anything at all metaphysical, in the most literal sense? >> Demons and angels? Ghosts? (It does seem that many people will believe in >> these things rather than what science says. If anyone has any doubt that we >> are an intellectually flawed species, just look at that fact.) >> >> In short, there seems to me to be no way to establish that metaphysical >> causes exists for anything. At least, no scientific way. Playing with >> words, thought experiments, and just sheer sophistry don't do the job. 
>> >> >> Either you didn't read the paper entitled "Detecting Qualia" >> ( https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit?usp=sharing ) >> or you didn't understand any of it. You must have at least read the title: >> "Detecting Qualia", but evidently you refuse to understand what most people >> understand such to mean, as proof by you asserting that there is "no way to >> measure it". Since you don't seem to get it, I guess I'll have to explain >> it to you: Detecting, is the same as measuring, and if it is detectable, it >> is physical, and experimentally demonstrably to all to be physical, just >> like all physics. >> >> Brent Allsop > > > Dear Brent, > > I've read the paper. Maybe I haven't understood it properly, but it seems to > me that the main thing you have in mind when talking about "effing the > ineffable" is the neural correlates of consciousness, and this will never be > able to tell you if the subject being studied really is conscious, let alone > what the actual conscious experience is like. > > Suppose, for example, you hypothesise that CMOS sensors in digital cameras > have colour qualia. You could show experimentally the necessary and > sufficient conditions for certain colour outputs, but how would this help > you understand what, if anything, the sensor was experiencing? If you tried > connecting it to your own brain and saw nothing, how would you know if that > was because the sensor lacked qualia or because it doesn't interface > properly with your brain? > > The other point I would like to make is that (it seems to me) you have > misunderstood the neural substitution thought experiment you describe near > the end of the paper. Suppose glutamate is responsible for redness qualia, > and you replace the glutamate with an analogue that functions just like > glutamate in every observable way, except it lacks the qualia. 
The subject > will then accurately describe red objects, say he sees red, and honestly > believe that he sees red. How would you show that he does not actually see > red? How would you know that your own red qualia were not eliminated last > night while you slept by installing such a mechanism? > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Stathis Papaioannou -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Sun Feb 1 14:00:49 2015 From: anders at aleph.se (Anders Sandberg) Date: Sun, 1 Feb 2015 15:00:49 +0100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: Message-ID: <1164451982-9676@secure.ericade.net> Keith Henson, 30/1/2015 7:58 AM: On Wed, Jan 28, 2015 at 4:00 AM, John Clark wrote: snip > 2) Some catastrophe hits a civilization when it gets a little past our > level; my best guess would be the electronic equivalent of drug abuse. Possible. But it seems an unlikely filter to get all possible variations on a nervous system if ETs with the capacity to affect the visible state of the universe are common. I suspect you need something fundamental that keeps every single one of them from spreading out. Exactly. This is something that needs to be reiterated again and again in discussions like this: just because something gets 99% of the population of 99% of species doesn't mean it works as a Fermi answer. The remaining 1% of the 99% affected civilizations and the 1% of unaffected civilizations will still make a lot of noise. At best it gives you a reduction of an already low number of civilization appearances. This is why cultural convergence to some kind of addiction, or to tiny dense fast objects, is not good enough to satisfy. 
Cultural convergence that gets *everybody*, whether humans, oddly programmed AGIs, silicon-based zorgons, the plant-women of Canopus III, or the sentient neutronium vortices of Geminga, has to be something really *weird*. Even among humans we can typically find some exceptions from human "cultural universals", and that is within a single fairly homogeneous species, not intelligence in general. This would leave the universe full of isolated civilizations that stay small because of speed-of-light limitations. Sped up, how long would a civilization last? If the ratio was a million to one, a century of clock time would be 100 million years subjective. But again, this is a soft constraint. It might be beneficial for 99% of all civilizations and 99% of their population, but the leftovers will be noticeable. Plus, human individuals and civilizations undertake projects that stretch far beyond their lifetimes (whether building cathedrals or launching space probes): it is not inconceivable that during those 100 million years of civilization entire cultures may arise that feel a philosophical, religious, artistic or pranksterish need to launch expeditions to colonize and/or reshape parts of the universe into something they think it should be. And if such offshoots behave the same way, we will have a spread of dense fast clusters at a rate that from the outside looks pretty brisk, despite from the inside being a rare and epic undertaking. PS Busy lately, but have a reply to Anders re brain size limits on my list to do. Looking forward to it. Am writing a post about energy use in technological singularities right now. Eric Chaisson's writings on energy rate density increase with complexity are interesting in the intersection of these two topics. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... 
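Anders' "leftovers" argument can be put in numbers. A toy calculation (the civilization count and filter fractions below are arbitrary placeholders chosen only to illustrate the shape of the argument, not estimates):

```python
# Even a filter that captures 99% of civilizations, and 99% of the
# population within the affected ones, leaves expansionist leftovers.
# All absolute numbers here are arbitrary illustrations.
n_civilizations = 10_000           # hypothetical civilizations arising
frac_civs_affected = 0.99          # civilizations the convergence captures
frac_holdouts = 0.01               # resisting fraction inside affected ones

unaffected = n_civilizations * (1 - frac_civs_affected)
leaky = n_civilizations * frac_civs_affected * frac_holdouts

print(round(unaffected + leaky))  # 199 potential expansionist sources
```

Since even one expansionist source can, in principle, reshape a galaxy, a filter would have to capture essentially everyone to work as a Fermi answer.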
URL: From hrivera at alumni.virginia.edu Sun Feb 1 16:21:03 2015 From: hrivera at alumni.virginia.edu (Henry Rivera) Date: Sun, 1 Feb 2015 11:21:03 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> Message-ID: <74442793-0CB7-4B79-A8B1-68081B1C7C17@alumni.virginia.edu> > On Jan 31, 2015, at 9:26 PM, Giovanni Santostasi wrote: > > Well, qualia may be accepted as something that can exist a priori to memory and humans in philosophy but not in any real science. There are a lot of brilliant minds that dismiss the concept of qualia all together. And Orch OR is superstition. I hear you and get that it's hard to imagine how one could detect or measure qualia with current scientific methods, but our methods will improve over time. All hypotheses in science can be regarded as superstitious until they have evidence in support of them or against a null hypothesis. However, I want to point out that Orch OR is probably the most scientific model proposed in philosophy of mind. Its two founders are scientists, I am a scientist, and Orch OR offers 20 _testable_ predictions, published in 1998, to assess its validity, of which six had been confirmed and none refuted last time I checked. From johnkclark at gmail.com Sun Feb 1 18:37:04 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 1 Feb 2015 13:37:04 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On Sun, Feb 1, 2015 at 12:07 AM, Keith Henson wrote: > > > Consider it from the viewpoint of a person who is alive today and lives > to a singularity event or is revived from cryonic suspension into a fast > simulation. It looks possible to do a million to one speedup I agree, and you're probably being conservative. 
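The remark that a million to one is probably conservative can be motivated by comparing signalling timescales in neurons and in electronics. A rough sketch (both timescales are order-of-magnitude textbook figures, not numbers from the thread):

```python
# Ratio of biological to electronic signalling timescales.
# Both figures are order-of-magnitude estimates only.
neuron_spike_interval_s = 1e-2   # ~100 Hz firing, i.e. ~10 ms between spikes
transistor_switch_s = 1e-9       # ~1 GHz logic, ~1 ns per operation

ratio = neuron_spike_interval_s / transistor_switch_s
print(f"{ratio:.0e}")  # 1e+07
```

By this crude measure a 10^6 speed-up leaves an order of magnitude in reserve, before even considering parallelism.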
> > If the population moves into a fast simulated environment, the subjective > time to get to the stars becomes even more ridiculous than it is now. ET doesn't need to travel to the stars, ET just needs to send one Von Neumann probe to one star, and then almost instantly from a cosmic perspective (less than 50 million years, perhaps much less) the entire Galaxy would be unrecognizable. And it's not as if this would take some huge commitment on the part of ET's civilization; in fact even an individual could easily do it. If Von Neumann probes are possible at all, and I can't think why they wouldn't be, then they're going to be dirt cheap; you buying a bag of peanuts would be a greater drag on your financial resources. > > I am prompted to think about this as a non-fatal reason we don't see any > aliens or their works. > I am having difficulty grasping the argument that the reason we can't see any changes that ET made to the universe with even our biggest telescopes is because ET can make changes a million times faster than we can. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Sun Feb 1 18:37:54 2015 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sun, 1 Feb 2015 12:37:54 -0600 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: I know it sounds difficult to swallow. But there is only one logical solution to Fermi's Paradox. We are the first "advanced" civilization in the galaxy, if not the entire visible universe. We are still fragile, but once we colonize other planets like Mars we should be almost impossible to eradicate. Giovanni On Fri, Jan 30, 2015 at 12:19 PM, John Clark wrote: > > On Fri, Jan 30, 2015 at 1:56 AM, Keith Henson > wrote: >> >> >> Some catastrophe hits a civilization when it gets a little past >>> our level; my best guess would be the electronic equivalent of drug abuse. >> >> >> > Possible. 
But it seems an unlikely filter to get all >> possible variations on a nervous system if ET's with the capacity to affect >> the visible state of the universe are common. I suspect you need something >> fundamental that keeps every single one of them from spreading out. >> > > But that's exactly my fear, it may be fundamental. If they can change > anything in the universe then they can change the very thing that makes the > changes, themselves. There may be something about intelligence and positive > feedback loops (like having full control of your emotional control panel) > that always leads to stagnation. After all, regardless of how well our life > is going who among us would for eternity opt out of becoming just a little > bit happier if all it took was turning a knob? And after you turn it a > little bit and see how much better you feel why not turn it again, perhaps > a little more this time. > > The above may be pure nonsense, I sure hope so. > > John K Clark > > > > > > > > > >> >> I have proposed that speeding up is universally desirable and >> obtainable on a scale that puts even the nearest stars millions of >> subjective years distant. This would leave the universe full of >> isolated civilizations that stay small for speed of light limitations. >> Sped up, how long would a civilization last? If the ratio was a >> million to one, a century of clock time would be 100 million years >> subjective. >> >> I have no idea of how long a civilization might last, but 100 million >> years seems like a long time. >> >> Keith >> >> PS Busy lately, but have a reply to Anders re brain size limits on my >> list to do. 
>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Sun Feb 1 18:43:35 2015 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sun, 1 Feb 2015 12:43:35 -0600 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: Keith, The speed-up hypothesis makes no sense. If you can speed up, you can also slow down. If an advanced civilization masters suspended animation, trips of billions of light years could be experienced as just lasting minutes (in particular if one can also travel close to c). So slowing down of consciousness is not a solution to Fermi's paradox. Giovanni On Sat, Jan 31, 2015 at 11:07 PM, Keith Henson wrote: > On Sat, Jan 31, 2015 at 4:00 AM, "Flexman, Connor" > wrote: > > snip > > > Just because our subjective time speeds up doesn't seem to imply a lack > of > > desire to optimize the cosmos for utils. > > I am not sure from what you write if you have your head around the subject. > > Consider it from the viewpoint of a person who is alive today and > lives to a singularity event or is revived from cryonic suspension > into a fast simulation. It looks possible to do a million to one > speedup, so as a first pass guess, assume that. > > What has happened from their view point is that all the distances have > increased, by a million times. Even the speed of light is slow, "A > million-to-one speed up would impose a subjective round-trip delay of > three days from one side of the earth to the other. Subjective round > trip delay to the moon would be two months." 
> > > http://web.archive.org/web/20121130232045/http://hplusmagazine.com/2012/04/12/transhumanism-and-the-human-expansion-into-space-a-conflict-with-physics/ > > If the population moves into a fast simulated environment, the > subjective time to get to the stars becomes even more ridiculous than > it is now. It's a local version of inflation. A single calendar year > becomes a million years subjective. > > A million years isn't a lot in geological time, but civilization is > less than 10,000 years old so this is 100 times that span. > > I once explained this to someone who was nothing short of horrified. > (On the other hand, he had a cell phone.) I told him that he could > have the job of watching the blinken blinken lights and if they quit > blinking, he was to push the reset button and restart uploaded > civilization from the last check point. > > I am prompted to think about this as a non fatal reason we don't see > any aliens or their works. > > Keith > > It seems many of us would gladly > > undertake the goal of sending colonizing expeditions to other galaxies > even > > if it took far past our lifetimes for them to arrive (provided all the > > normal caveats of our ability to ensure the meaningfulness of the > > colonizers' existence if they weren't humans, convergence of their values > > with our own, etc.). I don't see why a sped-up civilization wouldn't do > the > > same. Subjective time might be sped up, but they can still attempt to > > optimize the future. If they're undertaking speed-up at nanoscales, it's > > also likely they have enough control that their lifetimes are vastly > > extended in subjective time, if not longer than 100 years of our time. > > Colonizing stars in our galaxy could be done many times in a lifetime. > > Connor > > -- > > Non est salvatori salvator, > > neque defensori dominus, > > nec pater nec mater, > > nihil supernum. 
> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Feb 1 18:44:29 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 1 Feb 2015 13:44:29 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: <1164451982-9676@secure.ericade.net> References: <1164451982-9676@secure.ericade.net> Message-ID: On Sun, Feb 1, 2015 at 9:00 AM, Anders Sandberg wrote: > Cultural convergence that gets *everybody*, whether humans, oddly > programmed AGIs, silicon-based zorgons, the plant-women of Canopus III, or > the sentient neutronium vortices of Geminga, that has to be something > really *weird*. Yes but do you think the confluence of positive feedback loops and intelligence might produce effects that are weird enough? I hope not but that's my fear. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Sun Feb 1 18:47:07 2015 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sun, 1 Feb 2015 12:47:07 -0600 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: I agree, the slow time idea makes no sense at all as a solution to the Fermi paradox. On Sun, Feb 1, 2015 at 12:37 PM, John Clark wrote: > > On Sun, Feb 1, 2015 at 12:07 AM, Keith Henson > wrote: > >> >> > Consider it from the viewpoint of a person who is alive today and lives >> to a singularity event or is revived from cryonic suspension into a fast >> simulation. It looks possible to do a million to one speedup > > > I agree, and you're probably being conservative. > >> > > If the population moves into a fast simulated environment, >> the subjective time to get to the stars becomes even more ridiculous >> than it is now.
> > > ET doesn't need to travel to the stars, ET just needs to send one Von > Neumann probe to one star, and then almost instantly from a cosmic > perspective (less than 50 million years, perhaps much less) the entire > Galaxy would be unrecognizable. And it's not as if this would take some > huge commitment on the part of ET's civilization, in fact even an individual > could easily do it. If Von Neumann probes are possible at all, and I can't > think why they wouldn't be, then they're going to be dirt cheap, you buying > a bag of peanuts would be a greater drag on your financial resources. > >> , > >> > I am prompted to think about this as a non-fatal reason we don't >> see any aliens or their works. >> > > I am having difficulty grasping the argument that the reason we can't see > any changes that ET made to the universe with even our biggest telescopes > is because ET can make changes a million times faster than we can. > > John K Clark > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Sun Feb 1 19:25:42 2015 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 1 Feb 2015 12:25:42 -0700 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: If your goal includes living a long perceived life, then slowing down would be counterproductive. Besides, you can simulate a lot during that kind of time period if you have a portable energy source. So Giovanni, I see a reason to speed up, but slightly less reason to slow down. -Kelly On Sun, Feb 1, 2015 at 11:43 AM, Giovanni Santostasi wrote: > Keith, > The speed up hypothesis makes no sense. If you can speed up, you can also > slow down.
If an advanced civilization masters suspended animation, trips of > billions of light years could be experienced as lasting just minutes (in > particular if one can also travel close to c). > So slowing down of consciousness is not a solution to the Fermi paradox. > > Giovanni > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Feb 1 19:51:40 2015 From: pharos at gmail.com (BillK) Date: Sun, 1 Feb 2015 19:51:40 +0000 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On 1 February 2015 at 18:37, John Clark wrote: > ET doesn't need to travel to the stars, ET just needs to send one Von > Neumann probe to one star, and then almost instantly from a cosmic > perspective (less than 50 million years, perhaps much less) the entire > Galaxy would be unrecognizable. And it's not as if this would take some huge > commitment on the part of ET's civilization, in fact even an individual could > easily do it. If Von Neumann probes are possible at all, and I can't think > why they wouldn't be, then they're going to be dirt cheap, you buying a bag > of peanuts would be a greater drag on your financial resources. > > I am having difficulty grasping the argument that the reason we can't see > any changes that ET made to the universe with even our biggest telescopes is > because ET can make changes a million times faster than we can. > > It is because ET *thinks* a million times faster than us. But chemical reactions still take the same time. If it takes a subjective 10,000 years to do one spot-weld, then you are not going to do many. In theory, robots could do the job, but building the robots takes too long (subjective time). That's why ET probably retreats into virtual reality that reacts at the same speed as their thinking. Humans are finding the same thing already. It is far easier (and safer) to make a virtual reality SF world than actually build physical stuff to go to Mars.
(World of Warcraft?). As for voluntarily slowing down their processing, I think that is a rather obvious non-idea. It would be like voluntarily 'dying' for thousands of years. Humans could almost do that already. We can't stop ageing yet, but you could travel into the future as soon as a workable hibernation technique is developed. (NASA are already looking at this for Mars trips). But would there be many takers for this trip into the future? A human from only 100 years ago would face considerable problems re-educating themselves to the modern environment. They would probably need a 'carer' to look after them while they tried to adjust. As for thousands of years - forget it. You would never adjust. BillK From brent.allsop at canonizer.com Sun Feb 1 23:11:51 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 01 Feb 2015 16:11:51 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> Message-ID: <54CEB2B7.70804@canonizer.com> Hi John, On 1/31/2015 5:51 PM, John Clark wrote: > But at the end of the day the really important question isn't the nature of REDNESS or GREENNESS it's the question I asked in my last post that you didn't answer, do you believe as I do that consciousness is fundamental? > John K Clark I think there is elemental fundamental stuff in nature, and that this behaves in fundamental ways. We call this the laws of nature. For example, we know that mass, because of gravity, attracts other mass. We don't know why it does, just that it does. And this knowledge enables us to dance in the heavens. This theory also predicts that this elemental fundamental stuff, in addition to behaving according to these laws, also has fundamental qualities, like redness. It predicts that particular qualities are one and the same as particular behaviors.
The prediction is that nature builds our composite consciousness out of these elemental qualities. In the 3 color world, the scientists don't know why glutamate has a redness quality, just that it does. And the prediction is that we will be able to detect these qualities, but only if the zombie information we use to detect them is interpreted correctly. And once we can do this, in addition to dancing in the heavens, we will be able to significantly expand our visual knowledge of visible light, from having 2, 3, or 4 primary colors, to representing it with hundreds or more. On 1/31/2015 5:51 PM, John Clark wrote: > I don't think there would be any difference subjectively or > objectively between somebody who saw everything in black and white and > somebody who saw everything in black and red. > Oh, this is great. I think, then, we completely agree on everything. Obviously, there is some difference between this white and red, otherwise this would be a meaningless sentence to you and me. And, I completely agree that, behaviorally, and intelligently, at least, both zombie knowledge and inverted knowledge, can all act the same, and be just as intelligent. And I agree that, like Dennett does, you can choose to only focus on this behavior, and ignore (or 'quine' as Dan likes to call it instead of ignore) qualia. Everything behaves the same, at least until you ask them: "What is red like for you". In order to know the difference, you need to be able to compare the two elemental qualities in the same binding mechanism / consciousness. And instead of quining qualia, as you are, we are focusing on qualia and making testable scientific descriptions about what does and does not have qualia, which can be experimentally proven to all to be right.
You must admit that when we first throw the switch, and you for the first time experience a new blue, and from then on your knowledge of visible light is much more diverse and phenomenal, it will be quite convincing as to which theory is right? Brent From brent.allsop at canonizer.com Sun Feb 1 23:44:01 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 01 Feb 2015 16:44:01 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> Message-ID: <54CEBA41.6020300@canonizer.com> Hi Stathis, On 2/1/2015 5:52 AM, Stathis Papaioannou wrote: > > Aspects of consciousness, or if you prefer of qualia, can certainly be > investigated scientifically, and a large part of neuroscience and > psychology is devoted to doing just this. However, it is impossible to > investigate scientifically if someone actually has qualia and what > those qualia are like. > When I say you believe this is not approachable via science, I am talking about the latter, which you clearly state is not approachable via science. In the latter you are making the falsifiable prediction that you cannot eff the ineffable. > > > If you claim to be able to detect qualia then what test do you propose > to use to decide whether CMOS sensors have "an intrinsic qualitative > nature" or not? > The prediction is that if CMOS's behavior is the same as some quality (which we have likely never experienced before) that we will be able to present it to our augmented binding system in a way that will enable us to compare its quality to all the other qualities we have. Before we do this, we will be like Mary, and know everything about the behavior of CMOS. But once we know what our zombie information description of CMOS qualitatively represents, we will also know, qualitatively, what CMOS is like.
> > It's not problematic imagining that the qualia would vanish if the > substitution were made with parts lacking the redness quality. What is > problematic - and the entire point of the experiment - is that the > qualia would vanish **without either the subject or the experimenters > noticing that anything had changed**. > That explains our miscommunication, then. What I was trying to say, and what this says you missed, is that the testable theoretical prediction is that you will not be able to get or experience redness without presenting glutamate (or whatever your favorite theory predicts is responsible for elemental redness) to the binding system of your mind. Only when you replace the entire binding system, with a binding system that is interpreting zombie information representing redness, as if it was real redness, will it behave the same. So, it will be behaving the same, but the qualitative subjective nature of its behavior will have completely faded, and be absent. The system only behaves the way it does, because it contains interpreting hardware that is properly interpreting the zombie information as if it was the real thing. This is a form of the vanishing qualia case David Chalmers predicts is possible, right? Brent Allsop From johnkclark at gmail.com Sun Feb 1 23:45:57 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 1 Feb 2015 18:45:57 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On Sun, Feb 1, 2015 at 2:51 PM, BillK wrote: > It is because ET *thinks* a million times faster than us. And the arms on ET's nanomachines would move back and forth about a billion times for every time our arm can move back and forth once, and ET would control about a hundred thousand million billion trillion arms. > > But chemical reactions still take the same time. Yep, chemicals like the ones in your eye will still take .00000000000032 seconds to change shape when a photon hits them.
John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 1 23:53:51 2015 From: spike66 at att.net (spike) Date: Sun, 1 Feb 2015 15:53:51 -0800 Subject: [ExI] cool, singularity hub's take on ai Message-ID: <06a501d03e7a$55340870$ff9c1950$@att.net> Check this: http://singularityhub.com/2015/01/31/as-the-powerful-argue-ai-ethics-might-superintelligence-arise-on-the-fringes/ spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Feb 2 00:18:28 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 2 Feb 2015 11:18:28 +1100 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CEBA41.6020300@canonizer.com> References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> Message-ID: On 2 February 2015 at 10:44, Brent Allsop wrote: > > Hi Stathis, > > On 2/1/2015 5:52 AM, Stathis Papaioannou wrote: >> >> >> Aspects of consciousness, or if you prefer of qualia, can certainly be >> investigated scientifically, and a large part of neuroscience and psychology >> is devoted to doing just this. However, it is impossible to investigate >> scientifically if someone actually has qualia and what those qualia are >> like. >> > > When I say you believe this is not approachable via science, I am talking > about the latter, which you clearly state is not approachable via science. > In the latter you are making the falsifiable prediction that you cannot eff > the ineffable. > >> >> >> If you claim to be able to detect qualia then what test do you propose to >> use to decide whether CMOS sensors have "an intrinsic qualitative nature" or >> not?
>> > > The prediction is that if CMOS's behavior is the same as some quality (which > we have likely never experienced before) that we will be able to present it > to our augmented binding system in a way that will enable us to compare its > quality to all the other qualities we have. Before we do this, we will be > like Mary, and know everything about the behavior of CMOS. But once we know > what our zombie information description of CMOS qualitatively represents, > we will also know, qualitatively, what CMOS is like. Can you give an example of how you would go about this? >> It's not problematic imagining that the qualia would vanish if the >> substitution were made with parts lacking the redness quality. What is >> problematic - and the entire point of the experiment - is that the qualia >> would vanish **without either the subject or the experimenters noticing that >> anything had changed**. >> > > That explains our miscommunication, then. What I was trying to say, and > what this says you missed, is that the testable theoretical prediction is > that you will not be able to get or experience redness without presenting > glutamate (or whatever your favorite theory predicts > is responsible for elemental redness) to the binding system of your mind. I understand this: we will assume that you need *real* glutamate to have the redness experience. So if we use ersatz glutamate, that functions just like real glutamate but isn't real glutamate, you should get normal behaviour but absent or different qualia. We could perhaps do this experiment by replacing normal glutamate with glutamate made from different isotopes such as C-14 and O-17. This ersatz glutamate will function chemically perfectly normally, but your claim is that normal function is not enough to reproduce the qualia, you need the actual substance. So what do you predict would happen if the natural glutamate were replaced with ersatz glutamate?
> Only when you replace the entire binding system, with a binding system that > is interpreting zombie information representing redness, as if it was real > redness, will it behave the same. So, it will be behaving the same, but the > qualitative subjective nature of its behavior will have completely faded, > and be absent. The system only behaves the way it does, because it contains > interpreting hardware that is properly interpreting the zombie information > as if it was the real thing. This is a form of the vanishing qualia case > David Chalmers predicts is possible, right? Chalmers says that this would lead to a partial zombie, someone who is blind but says he can see normally and behaves as if he can see normally. He stops short of saying this is absurd, but I think if you allow for the possibility of partial zombies the whole philosophical edifice crumbles. -- Stathis Papaioannou From johnkclark at gmail.com Mon Feb 2 01:47:36 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 1 Feb 2015 20:47:36 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CEB2B7.70804@canonizer.com> References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> Message-ID: On Sun, Feb 1, 2015 PM, Brent Allsop wrote: >> But at the end of the day the really important question isn't the nature >> of REDNESS or GREENNESS it's the question I asked in my last post that you >> didn't answer, do you believe as I do that consciousness is fundamental? > > > > I think there is elemental fundamental stuff in nature, and that this > behaves in fundamental ways. We call this the laws of nature. For example, > we know that mass, because of gravity, attracts other mass. We don't know > why it does, just that it does. And this knowledge enables us to dance in > the heavens. This theory also [...]
If it's a theory then it's not fundamental; in theories we hypothesize that abstract things like mass, gravity and light have something to do with the only thing we know with absolute certainty, concrete direct experience. And I still don't think you answered my question about consciousness being fundamental. Do you think the series of all "why"s continues for infinity or do some of them eventually hit a brute fact? I think data processed intelligently producing consciousness is just such a brute fact. > In the 3 color world, the scientists don't know why glutamate has a > redness quality, just that it does. Something as simple as a chemical can't have anything to do with redness or any qualia, redness is a label, a label that under certain circumstances can be pleasant to apprehend and in other circumstances horrifying, a label made from an astronomically large number of memory associations and nested links. >> I don't think there would be any difference subjectively or objectively >> between somebody who saw everything in black and white and somebody who saw >> everything in black and red. >> > > >Oh, this is great. I think, then, we completely agree on everything. > Obviously, there is some difference between this white and red, otherwise > this would be a meaningless sentence to you and I. The difference between red and white is meaningful to us because our eye can register spots of light in 2 dimensions, intensity and wavelength, but to those who can only do so in one dimension, like both the black-white and black-red people, the difference would be meaningless. > Everything behaves the same, at least until you ask them: "What is red > like for you". And one of them would say "red is what gives black contrast, without it vision would be useless", and the other one would say "I understand completely, in my language we call that white". John K Clark > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafal.smigrodzki at gmail.com Mon Feb 2 03:54:52 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 1 Feb 2015 22:54:52 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On Sun, Feb 1, 2015 at 2:25 PM, Kelly Anderson wrote: > If your goal includes living a long perceived life, then slowing down > would be counterproductive. Besides, you can simulate a lot during that > kind of time period if you have a portable energy source. > > So Giovanni, I see a reason to speed up, but slightly less reason to slow > down. > ### On a lark you decide to fork a copy to be downloaded to a von Neumann probe, which is going to be about the size of a beer can (+ tons of weight in the laser sail, fusion engine, and the nanotech that eats the sail and transforms it into reaction mass for braking). You could run your copy at nominal speed, implying a trillion subjective years of boredom and on-board politics. You could put the copy into stasis until it reaches orbit around Alpha Centauri Bb and starts eating it to make more von Neumann probes. Which one would you choose? It's the same argument as choosing between a generation starship and hibernation when sending biological humans to the same destination (not that I think it's ever going to happen). I haven't heard anybody objecting to the idea of hibernation as a matter of principle, while s-f about generation starships tends to have a distinctly dated flavor, this idea having long since fallen out of favor among hard s-f fans. If biological hibernation is a reasonable solution to passing a gulf of time and space, then its cybernetic equivalent is just as reasonable and obvious. Rafał -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafal.smigrodzki at gmail.com Mon Feb 2 04:07:29 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 1 Feb 2015 23:07:29 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On Sun, Feb 1, 2015 at 2:51 PM, BillK wrote: > > > It is because ET *thinks* a million times faster than us. But chemical > reactions still take the same time. If it takes a subjective 10,000 > years to do one spot-weld, then you are not going to do many. ### If it takes a subjective 10,000 years to do a spot weld, and if you need a spot weld, you will fork a copy running at just the right speed to do it, and wildly in love with spot welding. Bill, the essence of your argument in the context of the Fermi paradox is that high-speed cognition is inherently so unstable that it always prevents real-world actions from being completed. As soon as you speed up a mind, looking at it from a normal human vantage point, it goes crazy and stops acting or speaking. Ennui and end-of-life burnout in the blink of an eye, always and for every possible mind. This is a highly implausible idea, equivalent to saying that high-level superhuman intelligence is in principle impossible. > Humans are finding the same thing already. It is far easier (and > safer) to make a virtual reality SF world than actually build physical > stuff to go to Mars. (World of Warcraft?). > ### WoW is losing subscribers. Not everybody sees value in wanking forever. BTW, most of my characters are on level 96 or higher now, in only 3 months all of them will be at level 100. Then I can lapse my subscription until the next expansion. Wanking is fun but only intermittently. Rafał -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafal.smigrodzki at gmail.com Mon Feb 2 04:16:03 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 1 Feb 2015 23:16:03 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <74442793-0CB7-4B79-A8B1-68081B1C7C17@alumni.virginia.edu> References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <74442793-0CB7-4B79-A8B1-68081B1C7C17@alumni.virginia.edu> Message-ID: On Sun, Feb 1, 2015 at 11:21 AM, Henry Rivera wrote: > However, I want to point out that Orch OR is probably the most scientific > model proposed in philosophy of mind. Its two founders are scientists, I am > a scientist, and Orch OR offers 20 _testable_ predictions to assess its > validity published in 1998, of which six are confirmed and none refuted > last time I checked. ### What are they? I find the theory fails the LOL test but I could be wrong. Tell me about non-obvious confirmed biophysical predictions and I might stop lol-ing. Rafał -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Mon Feb 2 04:18:30 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 01 Feb 2015 21:18:30 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> Message-ID: <54CEFA96.4010709@canonizer.com> Hi Stathis, On 2/1/2015 5:18 PM, Stathis Papaioannou wrote: > On 2 February 2015 at 10:44, Brent Allsop wrote: >> Hi Stathis, >> >> On 2/1/2015 5:52 AM, Stathis Papaioannou wrote: >>> >>> Aspects of consciousness, or if you prefer of qualia, can certainly be >>> investigated scientifically, and a large part of neuroscience and psychology >>> is devoted to doing just this.
However, it is impossible to investigate >>> scientifically if someone actually has qualia and what those qualia are >>> like. >>> >> When I say you believe this is not approachable via science, I am talking >> about the latter, which you clearly state is not approachable via science. >> In the latter you are making the falsifiable prediction that you cannot eff >> the ineffable. >> >>> >>> If you claim to be able to detect qualia then what test do you propose to >>> use to decide whether CMOS sensors have "an intrinsic qualitative nature" or >>> not? >>> >> The prediction is that if CMOS's behavior is the same as some quality (which >> we have likely never experienced before) that we will be able to present it >> to our augmented binding system in a way that will enable us to compare its >> quality to all the other qualities we have. Before we do this, we will be >> like Mary, and know everything about the behavior of CMOS. But once we know >> what our zombie information description of CMOS qualitatively represents, >> we will also know, qualitatively, what CMOS is like. > Can you give an example of how you would go about this? One of many theoretical falsifiable possibilities would be to replace glutamate in the synapse with CMOS. If you did this, and experienced the new quality of CMOS, you would then, for the first time, experience the new qualia, and finally know, qualitatively, what CMOS was like. Then, like zombie Mary, who before walking out of the room knew everything about how CMOS behaved, we would finally know how to qualitatively interpret that zombie information representing everything about the CMOS quality. And we/she would finally no longer be CMOS zombies. > >>> It's not problematic imagining that the qualia would vanish if the >>> substitution were made with parts lacking the redness quality.
What is >>> problematic - and the entire point of the experiment - is that the qualia >>> would vanish **without either the subject or the experimenters noticing that >>> anything had changed**. >>> >> That explains our miscommunication, then. What I was trying to say, and >> what this says you missed, is that the testable theoretical prediction is >> that you will not be able to get or experience redness without presenting >> glutamate (or whatever your favorite theory predicts >> is responsible for elemental redness) to the binding system of your mind. I understand this: we will assume that you need *real* glutamate to have the redness experience. So if we use ersatz glutamate, that functions just like real glutamate but isn't real glutamate, you should get normal behaviour but absent or different qualia. We could perhaps do this experiment by replacing normal glutamate with glutamate made from different isotopes such as C-14 and O-17. This ersatz glutamate will function chemically perfectly normally, but your claim is that normal function is not enough to reproduce the qualia, you need the actual substance. So what do you predict would happen if the natural glutamate were replaced with ersatz glutamate? I don't understand why you are bringing this up. The prediction is that you will be able to, through trial and error, find all possible necessary and sufficient detectable properties that enable you to reliably predict when someone is experiencing real redness, and when someone is not. If you ever discover any detectable property that produces redness, that you didn't know had a redness quality before, your previous theory will have been falsified and you must then simply alter your sets of necessary and sufficient detectable properties to include the new property. The same is true with all other physics.
Glutamate is well defined; you can make high-quality detectors of glutamate that will only give a positive result with real glutamate, and nothing else. The falsifiable prediction is that detecting real redness will be the same as detecting real glutamate. If ersatz glutamate has a redness quality, then you include that in the set of possible detectable properties. If altering glutamate, making it ersatz glutamate, alters the redness quality, then, either way, you still know exactly what has and what does not have a redness quality. > >> Only when you replace the entire binding system, with a binding system that >> is interpreting zombie information representing redness, as if it was real >> redness, will it behave the same. So, it will be behaving the same, but the >> qualitative subjective nature of its behavior will have completely faded, >> and be absent. The system only behaves the way it does, because it contains >> interpreting hardware that is properly interpreting the zombie information >> as if it was the real thing. This is a form of the vanishing qualia case >> David Chalmers predicts is possible, right? > Chalmers says that this would lead to a partial zombie, someone who is > blind but says he can see normally and behaves as if he can see > normally. He stops short of saying this is absurd, but I think if you > allow for the possibility of partial zombies the whole philosophical > edifice crumbles. > No, the prediction is that as long as you have not replaced the binding neuron, nothing you present to it will ever say and know something has a redness quality without real redness. In other words, without real glutamate, you will not be able to throw the switch between the simulated glutamate and the real thing, and reproduce the behavior saying the simulated glutamate is the same as the real thing.
So you will never get to the next level of replacing the binding neuron, because duplicating the "that is real redness" behavior will not be possible. You will only be able to skip that step, and replace the binding neuron with one which, by definition, has hardware translation that is interpreting the zombie information, as if it was real redness. But, obviously, you can only think of this as behaving as if it had real redness, because, by definition, the zombie information does not have it. Without the translation hardware, properly interpreting the zombie information, which does not have redness, as if it did, it will not reproduce the behavior. Brent From rafal.smigrodzki at gmail.com Mon Feb 2 04:35:38 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 1 Feb 2015 23:35:38 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CEB2B7.70804@canonizer.com> References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> Message-ID: On Sun, Feb 1, 2015 at 6:11 PM, Brent Allsop wrote: > > > This theory also predicts that this elemental fundamental stuff, in > addition to behaving according to these laws, also has fundamental > qualities, like redness. It predicts that particular qualities are one and > the same as particular behaviors. The prediction is that nature builds our > composite consciousness out of these elemental qualities. In the 3 color > world, the scientists don't know why glutamate has a redness quality, just > that it does. ### Sorry, this really fails the LOL test. Fundamental quality of redness in *glutamate*??
One might claim there are some irreducible properties of information processing that manifest as qualia, from simple redness to the feel of a quadratic function being plotted by a 3d-representing mind to the taste of madeleines in the remembering human, but to say that low-level chemistry determines these properties is just silly. There are a million chemical processes happening inside your mind that are clearly irrelevant to qualia, since they remain both subjectively and objectively undetected (i.e. have no measurable impact on behavior or measurable internal states that correlate with behavior). Replacing glutamate with another neurotransmitter in your mind while adjusting its receptors and enzymes so as to make higher-level brain activity (EEG, rCBF responses, etc.) the same would have no impact on "qualia", just as replacing a 4004 system with an i7 system in a Pong arcade game, properly implemented, would not change the game itself. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsantostasi at gmail.com Mon Feb 2 04:43:30 2015 From: gsantostasi at gmail.com (Giovanni Santostasi) Date: Sun, 1 Feb 2015 22:43:30 -0600 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: We already have periods of unconsciousness: sleep. It is not idle time. If there is a process that is not interesting to attend, like a long interstellar trip, going to sleep is a possible solution, as you noted. One could even update the mind software while doing that. Plus, downloading information directly to the brain would not be infeasible for an advanced civilization, so the problem of adapting to the future is not a problem at all (by the way, there are Australian aborigines who adapted quickly to modern life (a differential of thousands of years from their previous condition), maybe with some social problems, but their heads didn't explode.
Even if most of an ET society is lost in a VR world, nothing stops them from launching self-replicating probes that, as noted by others, would cost almost nothing for an advanced civilization. Even a bored ET kid could do that. As I said, there is only one reasonable solution to Fermi's paradox. We are the first and very likely only ones. Giovanni On Sun, Feb 1, 2015 at 1:51 PM, BillK wrote: > On 1 February 2015 at 18:37, John Clark wrote: > > ET doesn't need to travel to the stars, ET just needs to send one Von > > Neumann probe to one star, and then almost instantly from a cosmic > > perspective (less than 50 million years, perhaps much less) the entire > > Galaxy would be unrecognizable. And it's not as if this would take some > huge > > commitment on the part of ET's civilization, in fact even an individual > could > > easily do it. If Von Neumann probes are possible at all, and I can't > think > > why they wouldn't be, then they're going to be dirt cheap, you buying a > bag > > of peanuts would be a greater drag on your financial resources. > > > > I am having difficulty grasping the argument that the reason we can't see > > any changes that ET made to the universe with even our biggest > telescopes is > > because ET can make changes a million times faster than we can. > > > > > > It is because ET *thinks* a million times faster than us. But chemical > reactions still take the same time. If it takes a subjective 10,000 > years to do one spot-weld, then you are not going to do many. In > theory, robots could do the job, but building the robots takes too > long (subjective time). That's why ET probably retreats into virtual > reality that reacts at the same speed as their thinking. > > Humans are finding the same thing already. It is far easier (and > safer) to make a virtual reality SF world than actually build physical > stuff to go to Mars. (World of Warcraft?).
> > As for voluntarily slowing down their processing, I think that is a > rather obvious non-idea. It would be like voluntarily 'dying' for > thousands of years. Humans could almost do that already. We can't stop > ageing yet, but you could travel into the future as soon as a workable > hibernation technique is developed. (NASA are already looking at this > for Mars trips). But would there be many takers for this trip into the > future? A human from only 100 years ago would face considerable > problems re-educating themselves to the modern environment. They would > probably need a 'carer' to look after them while they tried to adjust. > As for thousands of years - forget it. You would never adjust. > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Mon Feb 2 04:45:26 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 01 Feb 2015 21:45:26 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> Message-ID: <54CF00E6.4020802@canonizer.com> Hi John, On 2/1/2015 6:47 PM, John Clark wrote: > And I still don't think you answered my question about consciousness > being fundamental. Do you think the series of all "why" continues for > infinity or do some of them eventually hit a brute fact? I think data > processed intelligently producing consciousness is just such a brute > fact. I guess I just don't understand what you are asking, then. Because the prediction is that the brute fact will be proven that something in our brain has a fundamental redness quality.
> > > Everything behaves the same, at least until you ask them: "What > is red like for you". > > > And one of them would say "red is what gives black contrast, without > it vision would be useless", and the other one would say "I understand > completely, in my language we call that white". > I guess if you can't see all the obvious mistakes and confusion in these kinds of statements - how "calling" something white (white is a piece of zombie information) has nothing to do with the quality being called "white", and how we are not talking about something being tagged, but about the qualitative nature of the tag itself, which enables you to know it is a redness tag (as opposed to a greenness tag) - then I am not sure what else I can say to help. Isn't the following all that matters? The prediction is that we will be able to develop the ability to throw a switch turning on a new hack in your brain, and when we do this, you will, for the first time, experience a new blue you have never experienced before. If we do this, must you not accept this theory about the qualitative nature of whatever it is that has a new blue quality as correct, and that you will no longer be a new blue zombie? Brent -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rafal.smigrodzki at gmail.com Mon Feb 2 04:55:58 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 1 Feb 2015 23:55:58 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> Message-ID: On Sun, Feb 1, 2015 at 8:47 PM, John Clark wrote: > The difference between red and white is meaningful to us because our eye > can register spots of light in 2 dimensions, intensity and wavelength, but > to those who can only do so in one dimension, like both the black-white and > black-red people, the difference would be meaningless. > ### Indeed, and this 2d processing matrix is actually essential to generating the perception of color. I don't know if anybody else mentioned it in this thread, but we know that color is not a property of the cones in the retina - in fact it is a construct produced predominantly in the lingual and fusiform gyri, and applied to the 2d and 3d renderings generated mainly in the dorsal stream (dorsal occipital and parietal cortical areas). One only needs to look at some of the color illusion images to realize that - color is not a property of light, but rather it's a complex quality computed from images in order to encode reflectances of various materials under different lighting conditions, from dawn to dusk. Reflectance is a property of materials and it provides information about the various important chemical properties of some of these materials (e.g. degree of ripeness of fruit, skin perfusion levels in a child or prospective mate - knowing them may have dramatic impact on your survival), and we evolved to have very sophisticated hardware dedicated to computing it. 
Shining a monochromatic light beam in your eye also produces a perception of color, but this is just a side-effect of an information processing event that evolved to do something other than sitting in high-school physics experiments. As David Deutsch might say, everything we see is theory-laden, even a strawberry. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Mon Feb 2 05:22:36 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Mon, 2 Feb 2015 00:22:36 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CEFA96.4010709@canonizer.com> References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> Message-ID: On Sun, Feb 1, 2015 at 11:18 PM, Brent Allsop wrote: If you ever discover any detectable property that produces redness, that you didn't know had a redness quality, before, your previous theory will have been falsified and you must then simply alter your sets of necessary and sufficient detectable properties, to include the new property. ### So you say the quality of redness is possessed by any physical object (whether glutamate or not glutamate) that produces the perception of redness. How is that not a circular argument? ----------------------- > > No, the prediction is that as long as you have not replaced the binding > neuron, nothing you present to it, will ever say and know something has a > redness quality, without real redness. In other words, without real > glutamate, you will not be able to throw the switch, between the simulated > glutamate, and the real thing, and reproduce the behavior saying the > simulated glutamate is the same as the real thing. ### Almost all neurons are binding neurons.
The neurons that construct the perception of redness are in the V4 area, and respond the same both to physiological (reflectance) and certain non-physiological (monochromator) stimuli. Redness does not exist as a property below the V4 area. Most cortical neurons have glutamatergic synapses, but only V4 neurons use glutamatergic transmission to construct the quale of redness. Glutamate is a transparent, easily crystallizable substance, and produces a pleasant taste when applied to umami receptors in the mouth. It has no "redness" quality. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Mon Feb 2 05:25:42 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 01 Feb 2015 22:25:42 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> Message-ID: <54CF0A56.7040202@canonizer.com> Hi John, I can see how, jumping into this conversation new the way you have, one would say LOL. But my prediction is that if you read and understand the paper, it will all make sense. Glutamate is just the example theory, one that only applies to the simplistic 3 color world, described in order to show, as simply as possible, how to get around the qualitative information gap. You obviously know of experimental facts that falsify this prediction. And this is the point: an example of a theory about the qualitative nature of consciousness that can be falsified. So, the only remaining task is to replace glutamate with whatever are the necessary and sufficient detectable properties for someone to experience a redness quality.
And any such new theory will simply be a variation on this first incorrect theory, one that enables us to bridge the qualitative gap, and to eff the ineffable, in a way that is experimentally verifiable to all. On 2/1/2015 9:35 PM, Rafal Smigrodzki wrote: > > > On Sun, Feb 1, 2015 at 6:11 PM, Brent Allsop > > wrote: > > > > This theory also predicts that this elemental fundamental stuff, > in addition to behaving according to these laws, also has > fundamental qualities, like redness. It predicts that particular > qualities are one and the same as particular behaviors. The > prediction is that nature builds our composite consciousness out > of these elemental qualities. In the 3 color > world, the > scientists don't know why glutamate has a redness quality, just > that it does. > > > ### Sorry, this really fails the LOL test. Fundamental quality of > redness in *glutamate*?? > > One might claim there are some irreducible properties of information > processing that manifest as qualia, from simple redness to the feel of > a quadratic function being plotted by a 3d-representing mind to the > taste of madeleines in the remembering human but to say that low-level > chemistry determines these properties is just silly. There is million > chemical processes happening inside your mind that are clearly > irrelevant to qualia, since they remain both subjectively and > objectively undetected (i.e. have no measurable impact on behavior or > measurable internal states that correlate with behavior). Replacing > glutamate with another neurotransmitter in your mind while adjusting > its receptors and enzymes as to make higher-level brain activity (EEG, > rCBF responses, etc.) the same would have no impact on "qualia", just > as replacing a 4004 system with an i7 system in a Pong arcade game, > properly implemented, would not change the game itself.
> > Rafal > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Feb 2 05:36:54 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 2 Feb 2015 16:36:54 +1100 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CEFA96.4010709@canonizer.com> References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> Message-ID: On 2 February 2015 at 15:18, Brent Allsop wrote: > > Hi Stathis, > > On 2/1/2015 5:18 PM, Stathis Papaioannou wrote: >> >> On 2 February 2015 at 10:44, Brent Allsop >> wrote: >>> >>> Hi Stathis, >>> >>> On 2/1/2015 5:52 AM, Stathis Papaioannou wrote: >>>> >>>> >>>> Aspects of consciousness, or if you prefer of qualia, can certainly be >>>> investigated scientifically, and a large part of neuroscience and >>>> psychology >>>> is devoted to doing just this. However, it is impossible to investigate >>>> scientifically if someone actually has qualia and what those qualia are >>>> like. >>>> >>> When I say you believe this is not approachable via science, I am talking >>> about the latter, which you clearly state is not approachable via >>> science. >>> In the latter you are making the falsifiable prediction that you cannot >>> eff >>> the ineffable. >>> >>>> >>>> If you claim to be able to detect qualia then what test do you propose >>>> to >>>> use to decide whether CMOS sensors have "an intrinsic qualitative >>>> nature" or >>>> not? 
>>>> >>> The prediction is that if CMOS's behavior is the same as some quality >>> (which >>> we have likely never experienced before) that we will be able to present >>> it >>> to our augmented binding system in a way that will enable us to compare >>> its >>> quality to all the other qualities we have. Before we do this, we will >>> be >>> like Mary, and know everything about the behavior of CMOS. But once we >>> know >>> what our zombie information description of CMOS qualitatively >>> represents, >>> we will also know, qualitatively, what CMOS is like. >> >> Can you give an example of how you would go about this? > > One of many theoretical falsifiable possibilities would be to replace > glutamate in the synapse with CMOS. If you did this, and experienced the > new quality of CMOS, you would then, for the first time, experience the new > qualia, and finally know, qualitatively, what CMOS was like. Then, like > zombie Mary, who before walking out of the room knew everything about how > CMOS behaved, you would finally know how to qualitatively interpret that zombie > information representing everything about the CMOS quality. And we/she > would finally no longer be CMOS zombies. It could be that CMOS sensors have qualia but you can't access them by interfacing with the brain, since the two systems are radically different. Conversely, if you stick a CMOS in your brain and experience different qualia, that could just be due to the disruption of normal brain activity, and not evidence that the CMOS in a digital camera has qualia. >>>> It's not problematic imagining that the qualia would vanish if the >>>> substitution were made with parts lacking the redness quality. What is >>>> problematic - and the entire point of the experiment - is that the >>>> qualia >>>> would vanish **without either the subject or the experimenters noticing >>>> that >>>> anything had changed**. >>>> >>> That explains our miscommunication, then.
What I was trying to say, >>> and >>> what this says you missed, is that the testable theoretical prediction is >>> that you will not be able to get or experience redness without presenting >>> glutamate (or replace glutamate with whatever your favorite theory >>> predicts >>> is responsible for elemental redness) to the binding system of your mind. >> >> I understand this: we will assume that you need *real* glutamate to >> have the redness experience. So if we use ersatz glutamate, that >> functions just like real glutamate but isn't real glutamate, you >> should get normal behaviour but absent or different qualia. We could >> perhaps do this experiment by replacing normal glutamate with >> glutamate made from different isotopes such as C-14 and O-17. This >> ersatz glutamate will function chemically perfectly normally, but your >> claim is that normal function is not enough to reproduce the qualia, >> you need the actual substance. So what do you predict would happen if >> the natural glutamate were replaced with ersatz glutamate? > > > I don't understand why you are bringing this up. The prediction is that you > will be able to, through trial and error, find all possible necessary and > sufficient detectable properties that enable you to reliably predict when > someone is experiencing real redness, and when someone is not. If you ever > discover any detectable property that produces redness, that you didn't know > had a redness quality, before, your previous theory will have been falsified > and you must then simply alter your sets of necessary and sufficient > detectable properties, to include the new property. The same is true with > all other physics. Glutamate is well defined, you can make high quality > detectors of glutamate, that will only give a positive result, with real > glutamate, and nothing else. The falsifiable prediction is that detecting > real redness will be the same as detecting real glutamate. 
If ersatz > glutamate has a redness quality, then you include that in the set of > possible detectable properties. If altering glutamate, making it ersatz > glutamate, alters the redness quality, then, either way, you still know > exactly what has and what does not have a redness quality. But the claim of functionalism is that if the ersatz glutamate is chemically identical with the real glutamate, the qualia will be reproduced; and in general, if any component of the brain is replaced with a functionally identical component, the qualia will be reproduced. The qualia do not reside in any particular component of the brain or any type of matter, they are generated by a particular type of behaviour. Just as if you replace a joint with an artificial joint made of a totally foreign substance and have normal joint movement, so you can replace a part of the brain with an artificial part made of a totally foreign substance and have normal qualia. I was led to believe by your various writings that you do not agree with this - that you think it might work for joints and movement, but not for brains and qualia, where you would need the actual substrate, not just a functional equivalent. >>> Only when you replace the entire binding system, with a binding system >>> that >>> is interpreting zombie information representing redness, as if it was >>> real >>> redness, will it behave the same. So, it will be behaving the same, but >>> the >>> qualitative subjective nature of it's behavior will have completely >>> faded, >>> and be absent. The system only behaves the way it does, because it >>> contains >>> interpreting hardware that is properly interpreting the zombie >>> information >>> as if it was the real thing. This is a form of the vanishing qualia case >>> David Chalmers predicts is possible, right? >> >> Chalmers says that this would lead to a partial zombie, someone who is >> blind but says he can see normally and behaves as if he can see >> normally. 
He stops short of saying this is absurd, but I think if you >> allow for the possibility of partial zombies the whole philosophical >> edifice crumbles. >> > > No, the prediction is that as long as you have not replaced the binding > neuron, nothing you present to it, will ever say and know something has a > redness quality, without real redness. In other words, without real > glutamate, you will not be able to throw the switch, between the simluated > glutamate, and the real thing, and reproduce the behavior saying the > simulated glutamate is the same as the real thing. So you will never get > to the next level of replacing the binding neuron, because duplicating the > "that is real redness" behavior will not be possible. You will be able to > skip that step, and replace the binding neuron which, by definition, has > hardware translation that is interpreting the zombie information, as if it > was real redness. But, obviously, you can only think of this as behaving as > if it had real redness, because, by definition, the zombie information does > not have it. Without the translation hardware, properly interpreting the > zombie information, which does not have redness, as if it did, it will not > reproduce the behavior. You seem to forget that, whatever neurons or neuronal components might experience, they can only communicate with downstream neurons by the timing of their synaptic firing. If the timing of synaptic firing is unchanged, the downstream motor neurons' firings will be unchanged and the behaviour of the organism will be unchanged. And if the functional replacement for glutamate (or whatever other component) does not alter the sequence and firing of the neuron - which must be the case if it is "functionally equivalent" - then the behaviour of the organism will be unchanged. So the "that is real redness behaviour" will happen provided only that the replacement part is functionally equivalent. 
It is the function of the part, not the matter it is made out of, that determines this. If you agree that, given identical function, the qualia must also be identical, then you are a functionalist. -- Stathis Papaioannou From protokol2020 at gmail.com Mon Feb 2 07:37:57 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Mon, 2 Feb 2015 08:37:57 +0100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: > We are the first and very likely only ones. It goes for at least our galaxy, but more likely for the best part of the observable Universe - for our light cone, as some like to put it. Well, this is quite a surprise for almost everybody brought up as an interstellar multiculturalist, Star Trek style. But humans mixing with Vulcans and Klingons is as stupid as the Prime Directive they obeyed. Except for a few good ideas, ST is a crappy place to learn from. On Mon, Feb 2, 2015 at 5:43 AM, Giovanni Santostasi wrote: > We already have period of unconsciousness: sleep. It is not idle time. > If there is a process that is not interesting to attend, like a long > interstellar trip going to sleep is a possible solution as you noted. One > could even update the mind software while doing that. Plus downloading > information directly to the brain would not be unfeasible for an advanced > civilization, so the problem of adapting to the future is not a problem at > all (by the way there are Australian aborigine that got adapted quickly to > modern life (differential of thousand of years from their previous > condition) maybe with some social problem but their head didn't explode. > > Even most of an ET society maybe be lost in a VR world nothing stops them > from launching self replicating probes that as noted by others would cost > almost nothing for an advanced civilization. Even a bored ET kid could do > that. > > As I said there is only one reasonable solution to the Fermi's paradox. We > are the first and very likely only ones.
> > Giovanni > > > On Sun, Feb 1, 2015 at 1:51 PM, BillK wrote: > >> On 1 February 2015 at 18:37, John Clark wrote: >> > ET doesn't need to travel to the stars, ET just needs to send one Von >> > Neumann probe to one star, and then almost instantly from a cosmic >> > perspective (less than 50 million years, perhaps much less) the entire >> > Galaxy would be unrecognizable. And it's not as if this would take some >> huge >> > commitment on the part of ET's civilization, in fact even a individual >> could >> > easily do it. If Von Neumann probes are possible at all, and I can't >> think >> > why they wouldn't be, then they're going to be dirt cheap, you buying a >> bag >> > of peanuts would be a greater drag on your financial resources. >> > >> > I am having difficulty grasping the argument that the reason we can't >> see >> > any changes that ET made to the universe with even our biggest >> telescopes is >> > because ET can make changes a million times faster than we can. >> > >> > >> >> It is because ET *thinks* a million times faster than us. But chemical >> reactions still take the same time. If it takes a subjective 10,000 >> years to do one spot-weld, then you are not going to do many. In >> theory, robots could do the job, but building the robots takes too >> long (subjective time). That's why ET probably retreats into virtual >> reality that reacts at the same speed as their thinking. >> >> Humans are finding the same thing already. It is far easier (and >> safer) to make a virtual reality SF world than actually build physical >> stuff to go to Mars. (World of Warcraft?). >> >> As for voluntarily slowing down their processing, I think that is a >> rather obvious non-idea. It would be like voluntarily 'dying' for >> thousands of years. Humans could almost do that already. We can't stop >> ageing yet, but you could travel into the future as soon a workable >> hibernation technique is developed. (NASA are already looking at this >> for Mars trips). 
But would there be many takers for this trip into the >> future? A human from only 100 years ago would face considerable >> problems re-educating themselves to the modern environment. They would >> probably need a 'carer' to look after them while they tried to adjust. >> As for thousands of years - forget it. You would never adjust. >> >> BillK >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Mon Feb 2 10:35:06 2015 From: anders at aleph.se (Anders Sandberg) Date: Mon, 2 Feb 2015 11:35:06 +0100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: Message-ID: <1238828622-2826@secure.ericade.net> Here is a part from a paper I am working on: "Another relevant property of a civilization is its temporal discounting. How much is the far future worth relative to the present? There are several reasons to suspect advanced civilizations have very long time horizons. In a dangerous or uncertain environment it is rational to rapidly discount the value of a future good since survival to that point is not guaranteed. However, mature expanding civilizations have likely reduced their existential risks to a minimum level and would have little reason to discount strongly (individual members, if short-lived, may of course have high discount rates). More generally, the uncertainty of the future will be lower and this also implies lower discount rates.
It also appears likely that a sufficiently advanced civilization could regulate its ``mental speed'', either by existing as software running on hardware with a variable clock speed, or by simply hibernating in a stable state for a period. If this is true, then the value of something after a period of pause/hibernation would be determined not by the chronological external time, but by how much time the civilization would subjectively experience in waiting for it. Changes in mental speed can hence make temporally remote goods more valuable if the observer can pause until they become available and there is no alternative cost for other goods. This is linked to a reduction of opportunity costs: advanced civilizations have mainly ``seen it all'' in the present universe and do not gain much more information utility from hanging around in the early era\footnote{Some exploration, automated or ``manned'', might of course still occur during the aestivation period, to be reported to the main part of the civilization at its end.} There are also arguments that future goods should not be discounted in cases like this. What really counts is fundamental goods (such as well-being or value) rather than commodities; while discounting prices of commodities makes economic sense, it may not make sense to discount value itself \cite{Broome}. This is why even a civilization with some temporal discounting can find it rational to pause in order to gain a huge reward in the far future. If the subjective experience is an instant astronomical multiplication of goods (with little risk) it is rational to make the jump. " This is used in a larger argument that advanced civilizations might aestivate until a late cosmological era, in order to use the low temperature to do more computation. This in itself does not solve Fermi, but I also argue that in this scenario there would have to be caretaker systems to monitor things and protect resources, and these may have good reasons to be 'quiet'.
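The pause/hibernation argument above can be put in the form of a toy calculation (a sketch of my own, not from the paper; the reward, discount rate, and timescale are purely illustrative assumptions). If value is discounted per unit of subjectively experienced time rather than chronological time, a civilization that hibernates through the wait loses almost nothing of a far-future reward to discounting:

```python
import math

# Toy model: continuously discounted present value, exp(-rate * t),
# where t is the amount of time the discounter actually experiences.
def discounted_value(reward, rate, experienced_time):
    """Value of `reward` after discounting over `experienced_time` periods."""
    return reward * math.exp(-rate * experienced_time)

REWARD = 1e30      # far-future payoff, arbitrary units (assumed)
RATE = 0.03        # per-year subjective discount rate (assumed)
WAIT_YEARS = 1e12  # chronological wait until the payoff (assumed)

# Staying awake the whole wait: the reward discounts to effectively nothing.
awake = discounted_value(REWARD, RATE, WAIT_YEARS)

# Hibernating: only about one subjective year is experienced, so the
# reward keeps almost all of its value.
paused = discounted_value(REWARD, RATE, 1.0)

print(awake)   # 0.0 (the discount factor underflows to zero)
print(paused)  # ~9.7e29
```

The same contrast holds for any positive rate: the hibernator's loss depends only on the subjective waiting time, which is why pausing makes the "astronomical multiplication of goods" rational even for a discounter.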
The above reasoning suggests that even very fast civilizations may do long-term stuff, even if the aestivation idea itself is wrong. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Mon Feb 2 10:39:10 2015 From: anders at aleph.se (Anders Sandberg) Date: Mon, 2 Feb 2015 11:39:10 +0100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: Message-ID: <1239217581-29491@secure.ericade.net> Kelly Anderson, 1/2/2015 8:28 PM: If your goal includes living a long perceived life, then slowing down would be counterproductive. Besides, you can simulate a lot during that kind of time period if you have a portable energy source. So Giovanni, I see a reason to speed up, but slightly less reason to slow down. In about 1.2 trillion years the background radiation will reach 10^-30 K and then stay there due to horizon radiation. Since the cost of computation scales as kT ln(2), that means that the same energy endowment can give you 10^30 times more computation if you save it till then than if you used it now. If you only care about the absolute amount of computation you can do (for example a 'solipsist' virtual civilization wanting to max its own subjective time of survival and internal richness), then it makes sense to pause until the late era. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed...
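The kT ln(2) figure above is the Landauer limit on the energy cost of erasing one bit, so a fixed energy budget buys E/(kT ln 2) bit-erasures. A quick back-of-envelope check of the 10^30 claim; the 10^-30 K far-future temperature is quoted in the post, while the ~3 K present-day temperature is my assumption for the comparison:

```python
# Back-of-envelope check of the Landauer-limit argument: erasing one bit
# costs at least k*T*ln(2), so a fixed energy budget E buys E/(k*T*ln 2)
# bit-erasures. The 1e-30 K figure is from the post; 3 K (roughly the
# present cosmic background temperature) is assumed here.
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def bit_erasures(energy_joules, temperature_kelvin):
    return energy_joules / (k * temperature_kelvin * math.log(2))

E = 1.0  # any fixed energy endowment; the ratio is independent of it

now = bit_erasures(E, 3.0)
late = bit_erasures(E, 1e-30)

print(late / now)  # ~3e30, i.e. roughly the 10^30 gain claimed above
```

The gain is just the temperature ratio, so the exact multiplier depends on the temperatures assumed, but the order of magnitude matches the post.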
URL: From anders at aleph.se Mon Feb 2 10:46:24 2015 From: anders at aleph.se (Anders Sandberg) Date: Mon, 2 Feb 2015 11:46:24 +0100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: Message-ID: <1239453602-2826@secure.ericade.net> John Clark , 1/2/2015 7:46 PM: On Sun, Feb 1, 2015 at 9:00 AM, Anders Sandberg wrote: > Cultural convergence that gets *everybody*, whether humans, oddly programmed AGIs, silicon-based zorgons, the plant-women of Canopus III, or the sentient neutronium vortices of Geminga, that has to be something really *weird*. Yes but do you think the confluence of positive feedback loops and intelligence might produce effects that are weird enough? I hope not but that's my fear. They need to be very weird. They need to strike well before the point humanity can make a self-replicating von Neumann probe (since it can be given a simple non-changing paperclipper AI and sent off on its merry way, breaking the Fermi silence once and for all) - if they didn't, they are not strong enough to work as a Fermi explanation. So either there is a very low technology ceiling, or we should see these feedbacks acting now or in the very near future, since I doubt the probe is more than a century ahead of us in tech capability. Intelligence doesn't seem to lead to convergence in our civilization: smart people generally do not agree or think alike (despite the Aumann theorem), and optimization and globalization don't make humanity converge that strongly. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Mon Feb 2 10:54:38 2015 From: anders at aleph.se (Anders Sandberg) Date: Mon, 2 Feb 2015 11:54:38 +0100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: Message-ID: <1239881749-4530@secure.ericade.net> Giovanni Santostasi , 1/2/2015 7:39 PM: I know it sounds difficult to swallow.
But there is only one logical solution to Fermi's Paradox. We are the first "advanced" civilization in the galaxy if not the entire visible universe. No, that is one *possible* explanation. There are a lot of logically consistent or even practically possible explanations (think of the low tech ceiling hypothesis, the zoo hypothesis, the convergence hypothesis, the xrisk hypothesis...). The home alone answer might be a likely-looking one, but if it was the only logical one the others would have clear inconsistencies. People are *way* too certain about their favourite Fermi answers. Given that this is a question about the dynamics of intelligence in the universe at large, a domain we have nearly no data in and no proper theory of how to investigate properly, we should be epistemically humble. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Feb 2 11:14:31 2015 From: pharos at gmail.com (BillK) Date: Mon, 2 Feb 2015 11:14:31 +0000 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: <1238828622-2826@secure.ericade.net> References: <1238828622-2826@secure.ericade.net> Message-ID: On 2 February 2015 at 10:35, Anders Sandberg wrote: > This is linked to a reduction of opportunity costs: advanced civilizations > have mainly ``seen it all'' in the present universe and do not gain much > more information utility from hanging around in the early era\footnote{Some > exploration, automated or ``manned'', might of course still occur during the > aestivation period, to be reported to the main part of the civilization at > its end.} > > This is why even a civilization with some temporal discounting can find it > rational to pause in order to gain a huge reward in the far future. If the > subjective experience is an instant astronomical multiplication of goods > (with little risk) it is rational to make the jump.
" > > This is used in a larger argument that advanced civilizations might > aestivate until a late cosmological era, in order to use the low temperature > to do more computation. This in itself does not solve Fermi, but I also > argue that in this scenario there would have to be caretaker systems to > monitor things and protect resources, and these may have good reasons to be > 'quiet'. > > The above reasoning suggests that even very fast civilizations may do > long-term stuff, even if the aestivation idea itself is wrong. > > The problem is that the bird in the hand is worth more. Remember it is exponential growth. Assuming a fast-thinker civ living in a created, ever-changing virtual reality, then the idea of voluntarily shutting down for trillions of years (even more in subjective time) until the end time of the universe appears very unlikely. I like Rafal's suggestion that to outsiders fast-thinker civs would appear to be dead and unresponsive. That's exactly how the outside universe would appear to them. The evidence we have is that if ETs exist, then they go somewhere else. We are in the very early phase of developing virtual reality worlds, but the possibilities look unlimited. Add in a bit of AI and nano-tech and we're there! BillK From brent.allsop at canonizer.com Mon Feb 2 12:40:33 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Mon, 02 Feb 2015 05:40:33 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> Message-ID: <54CF7041.4090904@canonizer.com> Hi Stathis, On 2/1/2015 10:36 PM, Stathis Papaioannou wrote: > It could be that CMOS sensors have qualia but you can't access them by > interfacing with the brain, since the two systems are radically > different. 
Conversely, if you stick a CMOS in your brain and > experience different qualia that could just be due to the disruption > of normal brain activity, and not evidence that the CMOS in a digital > camera has qualia. I completely agree, yes. But you are missing the more important point which is, if this theory can be falsified, as you are doing, you simply need to come up with a variation on the theory, till you get one that experimental science effingly proves is the one. It has a large part to do with both what has the quality, and how these qualities interact with the binding mechanism. It could be that only neurons and neurotransmitters can have and bind together qualitative properties. And that we will never find a way to integrate stuff like CMOS, qualitatively, directly, as you predict. But, again, what matters is the general framework where you can determine what does, and what does not have qualia, and it is not different than the rest of science, as long as you include this qualitative information that is intrinsic, and not just zombie information being interpreted as if it was the real thing. > You seem to forget that, whatever neurons or neuronal components might > experience, they can only communicate with downstream neurons by the > timing of their synaptic firing. If the timing of synaptic firing is > unchanged, the downstream motor neurons' firings will be unchanged and > the behaviour of the organism will be unchanged. And if the functional > replacement for glutamate (or whatever other component) does not alter > the sequence and firing of the neuron - which must be the case if it > is "functionally equivalent" - then the behaviour of the organism will > be unchanged. So the "that is real redness behaviour" will happen > provided only that the replacement part is functionally equivalent. But you are leaving out the binding system.
The only way what you are claiming could be true is if there is no theoretically possible way to do what you are assuming can't be done. But there are many theoretical possibilities which could bind, effingly, multiple qualities, so you are aware of redness and greenness, at the same time, and that you can know the difference because of this qualitative binding. The neural wave theories that Steven Lehar talks about are just one example of many possible theories that can bind together waves, over areas of time and space, which falsifies what you are claiming, that you cannot bind together things like this, to be aware of both of them, qualitatively, at the same time. > It is the function of the part, not the matter it is made out of, that > determines this. If you agree that, given identical function, then the > qualia must also be identical, then you are a functionalist. Again, I completely agree with this. But the higher level testability, qualitatively, still applies. When you are aware of redness, there must be something, that is detectably causing you to be aware of this. And there must be something that is detectably different than this, which is reliably responsible for us being aware of greenness. Since no functionalist, ever gives any possible way to reliably detect this difference, even if it is an obviously falsifiable difference, it makes it hard to use functionalist theories to talk about the more important part of how to falsifiably detect what is, and is not responsible for redness. All you need to do is replace glutamate, with whatever you could possibly predict is detectably responsible for redness. I've tried to find any possible way to detect, functionally, what is responsible for redness, and how this could be different than some functionalist greenness, but I can't, without it being circular, and not defined anywhere. 
If you can provide any way to test for this, I will be glad to include the predictions you provide, along with the different predictions being included with materialist theories, so we can both leave it to scientific demonstration to prove which is right. But till you can provide how to test for what you are claiming, there isn't much I can do. Seems to me you must admit that if a materialist theory works, and you can use waves, or anything else to bind them together, as you predict can't be done, you must admit that functionalism would be scientifically proven wrong. And the same is true for detectable functionalism. But, we must first find, either theoretically or experimentally, some reasonable testable theory which is detectably responsible for us experiencing a redness quality. Brent From brent.allsop at canonizer.com Mon Feb 2 12:56:48 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Mon, 02 Feb 2015 05:56:48 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> Message-ID: <54CF7410.8080409@canonizer.com> On 2/1/2015 10:22 PM, Rafal Smigrodzki wrote: > > > On Sun, Feb 1, 2015 at 11:18 PM, Brent Allsop > > wrote: > > If you ever discover any detectable property that produces redness, > that you didn't know had a redness quality, before, your previous > theory will have been falsified and you must then simply alter your > sets of necessary and sufficient detectable properties, to include the > new property. > > ### So you say the quality of redness is possessed by any physical > object (whether glutamate or not glutamate) that produces the > perception of redness. > > How is that not a circular argument?
The prediction is that there is some set of detectable properties responsible for us being aware of a redness quality, and that there is another set that can be distinguished from this, that is responsible for greenness. Whichever of these science proves is and isn't redness defines it, and makes it not circular. If you are a functionalist, as Stathis is, it does seem circular. But if you can define any theoretically possible way to reliably detect what is "functionally" responsible (even if that is a falsifiable way, which can just be altered, till you get it right) then you will have the grounding real definition, which makes it no longer circular. > > ----------------------- > > > No, the prediction is that as long as you have not replaced the > binding neuron, nothing you present to it, will ever say and know > something has a redness quality, without real redness. In other > words, without real glutamate, you will not be able to throw the > switch, between the simulated glutamate, and the real thing, and > reproduce the behavior saying the simulated glutamate is the same > as the real thing. > > > ### Almost all neurons are binding neurons. The neurons that construct > the perception of redness are in the V4 area, and respond the same > both to physiological (reflectance) and certain non-physiological > (monochromator) stimuli. Redness does not exist as a property below > the V4 area. Most cortical neurons have glutamatergic synapses but > only V4 neurons use glutamatergic transmission to construct the quale > of redness. > > Glutamate is a transparent, easily crystallizable substance, and > produces a pleasant taste when applied to umami receptors in the > mouth. It has no "redness" quality. This is great that you know so much about glutamate, neurons, and all this stuff. It's fun to talk to smart people like this. But you need to read the paper, and understand the "quale interpretation problem" which explains exactly this.
In its crystal form, glutamate reflects white light. But if we represent this with something that is not glutamate, and has a whiteness quality, this will simply be misinterpreting the qualitative nature of redness. If you interpret glutamate as not having any quale, as our knowledge of transparent glutamate might make it seem, again, thinking there is no qualitative property there, simply because of the qualitative nature (or lack thereof) of our knowledge, is exactly the quale interpretation problem. Here is the link to the working draft of the paper for anyone that does not have it yet: https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit Brent Allsop -------------- next part -------------- An HTML attachment was scrubbed... URL: From connor_flexman at brown.edu Mon Feb 2 13:51:07 2015 From: connor_flexman at brown.edu (Flexman, Connor) Date: Mon, 2 Feb 2015 08:51:07 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: <1238828622-2826@secure.ericade.net> Message-ID: On Mon, Feb 2, 2015 at 6:14 AM, BillK wrote: > > The problem is that the bird in the hand is worth more. Remember it is > exponential growth. Assuming a fast-thinker civ living in a created, > ever-changing virtual reality, then the idea of voluntarily shutting > down for trillions of years (even more in subjective time) until the > end time of the universe appears very unlikely. Anders takes this into account (thank you, Anders, for nailing everything on this one). The bird in the hand is just a specific rule we have for temporal discounting because we're still in a civilization where death occurs and accidents happen frequently. The argument isn't that they would shut down for trillions of years or until the end time, that's absurd. Of course they're going to keep spending most of their time alive.
The idea is that they might rest for a few years while fast probes reach where they want to go, or while other subjectively slow processes are inhibited by the speed of light. You even make the point yourself that they could immerse themselves in virtual reality instead of shutting down, a simple explanation making long-term goals even more attainable. Connor -- Non est salvatori salvator, neque defensori dominus, nec pater nec mater, nihil supernum. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clausb at gmail.com Sun Feb 1 13:19:14 2015 From: clausb at gmail.com (Claus Bornich) Date: Sun, 1 Feb 2015 14:19:14 +0100 Subject: [ExI] taxonomy for fermi paradox fans Message-ID: On Sat, 31 Jan 2015 Keith Henson wrote: >It looks possible to do a million to one speedup, >so as a first pass guess, assume that. > If the population moves into a fast simulated environment, the > subjective time to get to the stars becomes even more ridiculous than > it is now. It's a local version of inflation. A single calendar year > becomes a million years subjective. Very nicely put. I would imagine the solution (then as now) is to send off copies of yourself that will awaken on arrival. By the time you arrive your civilization will no longer be one that you recognise (if it even exists), but then again you would likely not be able to communicate with it at such distances anyway except with one-way messages. Why go? Well, why not, assuming you have the resources. That is a big assumption of course, but then again a fast thinker society might not have time to burn a significant amount of resources. By going you are effectively creating a backup - in fact why not take a snapshot of the entire civilization and send out copies at various stages (it's not like you really need to worry about their return as that will be billions of years in your subjective future).
Aside from simple survival, there is the urge to explore and seek new knowledge outside the system. One of the better hard SF books on this subject is Greg Egan's Diaspora (https://www.goodreads.com/book/show/156785.Diaspora). Without a doubt the most ambitiously epic story I have read in scope of time and civilization and one of the most beautiful accounts of the "birth" of an artificial intelligence. Claus -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 2 15:51:21 2015 From: spike66 at att.net (spike) Date: Mon, 2 Feb 2015 07:51:21 -0800 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: <1239881749-4530@secure.ericade.net> References: <1239881749-4530@secure.ericade.net> Message-ID: <023001d03f00$17d4b430$477e1c90$@att.net> Giovanni Santostasi , 1/2/2015 7:39 PM: >I know it sounds difficult to swallow. But there is only one logical solution to Fermi's Paradox. We are the first "advanced" civilization in the galaxy if not the entire visible universe. Giovanni Giovanni that is perhaps a leading solution, but it is too presumptuous by an order of magnitude to say it is the only logical solution. At this point I would put it second behind the simulation solution: the Fermi silence is intentional, designed to keep us avatars astonished and pondering where and when we are located in the universe. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Mon Feb 2 17:22:36 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 2 Feb 2015 12:22:36 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CF00E6.4020802@canonizer.com> References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> <54CF00E6.4020802@canonizer.com> Message-ID: Hi Brent >> And I still don't think you answered my question about consciousness >> being fundamental. > > > > I guess I just don't understand what you are asking, then. > I'm saying that eventually you will get to "consciousness is the way data feels like when it is being processed", and after that it would be pointless to ask how data actually does it because "consciousness is the way data feels like when it is being processed" is the end of that chain of "how" questions, that chain is not infinitely long but eventually reaches a brute fact and terminates, eventually it reaches something fundamental. > > Because the prediction is that the brute fact will be proven that > something in our brain has a fundamental redness quality. > I would agree with that except I'd substitute "mind" for "brain" because mind is what a brain does, and I know with a certainty far beyond any need of proof that mind, or at least one mind, can produce redness. > >>> Everything behaves the same, at least until you ask them: "What is red >>> like for you". >> >> >> >> And one of them would say "red is what gives black contrast, without >> it vision would be useless", and the other one would say "I understand >> completely, in my language we call that white". 
> > > > I guess if you can't see all the obvious mistakes and confusion in > these kinds of statements, and how "calling" something white (White is a > piece of zombie information), has nothing to do with the quality being > called "white", > I'm not confused, I understand exactly what you're saying, I'm just saying you're wrong. In another post you talked about putting on red and green inverting glasses and you said and I agreed, that things would look very strange when I first put them on because they contradicted previous memories. Then you said, and I agreed, that after a period of time my mind would reassign all those huge number of memory associations and nested links so that they were no longer contradictory and things would look normal again. However you then said "but you will know that your knowledge is very qualitatively different than before you put on those glasses". If all the links have been rearranged where was that knowledge stored? If I followed the nerves in your tongue that make it create the noise "things looked different in the past" where did they originate? It can't be anyplace in the brain as all those links have been changed, so your only option is to invoke the soul. As for me I don't believe the tongue would make that noise because I don't believe in the soul, although I do believe that information is as close as you can get to the traditional idea of the soul and still stay within the scientific method. Well... actually to tell the truth I am a bit confused, I'm still not clear what "zombie red" is. And if I receive a sequence of binary digits how can I determine if it's zombie information or non-zombie information? > > we are not talking about smoething being tagged, but the qualitative > nature of the tag, > I understand that also, you're interested in the tag itself and I am too, it's a very interesting tag. 
And I'm saying that every single objective property of that tag and, much more important, every single SUBJECTIVE property of that tag comes ONLY from its association with previous memories and links. Redness IS the memories and links. > Isn't all that matters is the following? The prediction is that we will > be able to develop the ability to throw a switch turning on a new hack in > your brain, and when we do this, you will for the first time, experience a > new blue you have never experienced before. > Humans can see about 10 million colors, beyond that 2 colors are too similar for us to tell apart. If a transhuman wished to have better color discrimination he would need to see more than 10 million colors, and so obviously he would subjectively experience more colors than we do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Feb 2 18:41:23 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 2 Feb 2015 13:41:23 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: <1239453602-2826@secure.ericade.net> References: <1239453602-2826@secure.ericade.net> Message-ID: On Mon, Feb 2, 2015 Anders Sandberg wrote: > > Yes but do you think the confluence of positive feedback loops and >> intelligence might produce effects that are weird enough? I hope not but >> that's my fear. > > > > They need to be very weird. They need to strike well before the point > humanity can make a self-replicating von Neumann probe (since it can be > given a simple non-changing paperclipper AI and sent off on its merry way, > breaking the Fermi silence once and for all) - if they didn't, they are not > strong enough to work as a Fermi explanation. So either there is a very low > technology ceiling, or we should see these feedbacks acting now or in the > very near future, > Could drug addiction be the first signs of that very dangerous positive feedback loop?
During most of human existence it was a nonissue, but then about 8000 BC alcoholic beverages were invented, but they were so dilute you'd really have to work at it to get into trouble. Then about 500 years ago distilled alcoholic beverages were invented and it became much easier to become an alcoholic. Today we have many drugs that are far more powerful than alcohol. What happens if this trend continues exponentially? I doubt the [von Neumann] probe is more than a century ahead of us in tech > capability. I agree > > Intelligence doesn't seem to lead to convergence in our civilization: > smart people generally do not agree or think alike Yes, so even if many or even most ETs think that sending out a von Neumann probe would be a bad idea, there will always be somebody who disagrees. And it only takes one. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Mon Feb 2 19:56:08 2015 From: bbenzai at yahoo.com (Ben) Date: Mon, 02 Feb 2015 19:56:08 +0000 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at, 2015 MTA conference In-Reply-To: References: Message-ID: <54CFD658.4040205@yahoo.com> Brent Allsop writes: > "I think there is elemental fundamental stuff in nature, and that this behaves in fundamental ways. We call this the laws of nature. For example, we know that mass, because of gravity, attracts other mass. We don't know why it does, just that it does. And this knowledge enables us to dance in the heavens. > This theory also predicts that this elemental fundamental stuff, in addition to behaving according to these laws, also has fundamental qualities, like redness." Ridiculous. 'Redness' is *absolutely not* a fundamental quality, any more than beauty or weirdness or jealousy is. How many 'fundamental qualities' do you think there are? What about cantankerousness? Happiness? Warmth? The sensation of shivering? The feel of wool? Dread? Dizziness? The sound of high C played on a violin?
Sweetness? That sensation you get when you pick up a mug of coffee that you think is full and it's actually empty? Laughing? The smell of mint? And on and on and on... If you're going to use the word 'fundamental', it has to actually /mean/ fundamental. You know, as in foundational, sitting at the bottom, something that other things are made from. Everything can't be fundamental, which is basically what follows if you claim that something like 'redness' is (unless you think that 'redness' is, but the smell of mint isn't. I'd like to hear an explanation for that!). Far from being fundamental, 'redness' is pretty high-level, a complex interaction of a whole bunch of things going on in the brain. It's not a property of the universe, it's an experience manufactured by the mind. As far as I know, and as far as physics is concerned (to my knowledge) there are only five (or three, depending how you look at it) *fundamental* things: Space/Time, Matter/Energy, and Information. EVERYTHING is made of these things. And I really do mean everything, including minds and what goes on in them. That's what 'fundamental' means, and things like subjective experience of colour comes nowhere near this level. Talking about colour perception as being fundamental is like confusing ocean liners with quarks, or like those old science-fiction stories where someone discovers that planetary systems and atoms are /exactly the same!!!/. Quaint, entertaining, but utterly, disastrously, wrong. 
Ben Zaiboc From brent.allsop at canonizer.com Mon Feb 2 20:20:50 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Mon, 2 Feb 2015 13:20:50 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> <54CF00E6.4020802@canonizer.com> Message-ID: Hi John, On Mon, Feb 2, 2015 at 10:22 AM, John Clark wrote: > > > Isn't all that matters is the following? The prediction is that we will >> be able to develop the ability to throw a switch turning on a new hack in >> your brain, and when we do this, you will for the first time, experience a >> new blue you have never experienced before. >> > > Humans can see about 10 million colors, beyond that 2 colors are too > similar for us to tell apart. If a transhuman wished to have better color > discrimination he would need to see more than 10 million colors, and so > obviously he would subjectively experience more colors than we do. > > > You forgot to include: "and yes, once I experience these new colors, as Brent's theory predicts, much of my thinking would be falsified. Including my surprise that I can experience this new blue in a dark room with no light at all. Boy, it was a complete waste of time to ever talk about light, no wonder Brent got so upset when I only ever talked about light, even though it had nothing to do with this new experience and its qualitative nature." Brent -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anders at aleph.se Mon Feb 2 22:19:13 2015 From: anders at aleph.se (Anders Sandberg) Date: Mon, 2 Feb 2015 23:19:13 +0100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: Message-ID: <1281132802-22437@secure.ericade.net> John Clark , 2/2/2015 7:44 PM: On Mon, Feb 2, 2015 Anders Sandberg wrote: > Yes but do you think the confluence of positive feedback loops and intelligence might produce effects that are weird enough? I hope not but that's my fear. > They need to be very weird. They need to strike well before the point humanity can make a self-replicating von Neumann probe (since it can be given a simple non-changing paperclipper AI and sent off on its merry way, breaking the Fermi silence once and for all) - if they didn't, they are not strong enough to work as a Fermi explanation. So either there is a very low technology ceiling, or we should see these feedbacks acting now or in the very near future, Could drug addiction be the first signs of that very dangerous positive feedback loop? During most of human existence it was a nonissue, but then about 8000 BC alcoholic beverages were invented, but they were so dilute you'd really have to work at it to get into trouble. Then about 500 years ago distilled alcoholic beverages were invented and it became much easier to become an alcoholic. Today we have many drugs that are far more powerful than alcohol. What happens if this trend continues exponentially? You need to assume drugs that are so addictive that *nobody*, not even people who can see them coming, can escape them. And that this works for *all* aliens, whether neutronium vortices or AI. Given that we know that even crack cocaine or nicotine doesn't hook the majority of people who try it (!), this requires some pretty wild extrapolation of the trend. It is the "it only takes one" aspect that makes the cultural convergence hypothesis so weak.
(Still, addictions or their wireheading generalisations likely are big risks for civilizations emerging into automorphism - we spend a lot of effort to make cooking, games, drugs and other entertainment as appealing as possible, so it is not surprising we often end up addicted or at least overindulging. It might be that nearly every civilization struggles with its own version of obesity...) Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Mon Feb 2 22:33:06 2015 From: anders at aleph.se (Anders Sandberg) Date: Mon, 2 Feb 2015 23:33:06 +0100 Subject: [ExI] Energy requirements for the singularity Message-ID: <1282211517-5098@secure.ericade.net> Just some calculations: http://aleph.se/andart2/megascale/energy-requirements-of-the-singularity/ Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 2 22:34:59 2015 From: spike66 at att.net (spike) Date: Mon, 2 Feb 2015 14:34:59 -0800 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: <1281132802-22437@secure.ericade.net> References: <1281132802-22437@secure.ericade.net> Message-ID: <041301d03f38$7ab25390$7016fab0$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Anders Sandberg > Could drug addiction be the first signs of that very dangerous positive feedback loop? During most of human existence it was a nonissue, but then about 8000 BC alcoholic beverages were invented, but they were so dilute you'd really have to work at it to get into trouble. Then about 500 years ago distilled alcoholic beverages were invented and it became much easier to become an alcoholic. Today we have many drugs that are far more powerful than alcohol. What happens if this trend continues exponentially? 
> It might be that nearly every civilization struggles with its own version of obesity...) Anders Sandberg Ja. There is plenty of evidence that addiction is a huge challenge to any tech-enabled society or species. Consider how food has evolved in just the last century. We have learned to make food which is so good and so tuned to our evolved appetites that it gets ever harder to resist. Furthermore it is a one-way street: food evolves. There is little chance we can return as a species to specialty diets like paleo, for instance. Some will choose that, most will not; we see the results. It is easy to just say no to drugs (and I do recommend that). But with food, JSN is not an option. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrivera at alumni.virginia.edu Tue Feb 3 00:44:47 2015 From: hrivera at alumni.virginia.edu (Henry Rivera) Date: Mon, 2 Feb 2015 19:44:47 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <74442793-0CB7-4B79-A8B1-68081B1C7C17@alumni.virginia.edu> Message-ID: On Sun, Feb 1, 2015 at 11:16 PM, Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: > On Sun, Feb 1, 2015 at 11:21 AM, Henry Rivera > wrote: > >> However, I want to point out that Orch OR is probably the most scientific >> model proposed in philosophy of mind. Its two founders are scientists, I am >> a scientist, and Orch OR offers 20 _testable_ predictions to assess its >> validity published in 1998, of which six are confirmed and none refuted >> last time I checked. > > > ## What are they? > > I find the theory fails the LOL test but I could be wrong. Tell me about > non-obvious confirmed biophysical predictions and I might stop lol-ing. > >From S. Hameroff, R. Penrose. Consciousness in the universe. A review of the "Orch OR" theory. 
Phys Life Rev, 11 (1) (2014), pp. 39-78, available at http://www.sciencedirect.com/science/article/pii/S1571064513001905 "5.7. Testable predictions of Orch OR - current status Orch OR involves numerous fairly specific and essentially falsifiable hypotheses. In 1998 twenty testable predictions of Orch OR in 9 general categories were published [15]. They are reviewed here with our comments on their current status in italics. Neuronal microtubules are directly necessary for cognition and consciousness 1. Synaptic plasticity correlates with cytoskeletal architecture/activities. -The current status of this is unclear, although microtubule networks do appear to define and regulate synapses. 2. Actions of psychoactive drugs, including antidepressants, involve neuronal microtubules. -This indeed appears to be the case. Fluoxetine (Prozac) acts through microtubules [167]; anesthetics also act through MTs [86]. 3. Neuronal microtubule stabilizing/protecting drugs may prove useful in Alzheimer's disease. -There is now some evidence that this may be so; for example, MT-stabilizer epothilone is being tested in this way [168]. Microtubules communicate by cooperative dynamics 4. Coherent gigahertz excitations will be found in microtubules. -Indeed; in some remarkable new research, Anirban Bandyopadhyay's group has found coherent gigahertz, megahertz and kilohertz excitations in single MTs [88,89]. 5. Dynamic microtubule vibrations correlate with cellular activity. -Evidence on this issue is not yet clear, although mechanical megahertz vibrations (ultrasound) do appear to stimulate neurons and enhance mood [127]. 6. Stable microtubule patterns correlate with memory. -The evidence concerning memory encoding in MTs remains unclear, though synaptic messengers CaMKII and PkMz do act through MTs. Each CaMKII may encode (by phosphorylation) 6 information bits to 6 tubulins in a microtubule lattice. 7. 'EPR-like' non-local correlation between separated microtubules. 
-This is not at all clear, but such things are very hard to establish (or refute) experimentally. Bandyopadhyay's group is testing for 'wireless' resonance transfer between separated MTs [142]. Quantum coherence occurs in microtubules 8. Phases of quantum coherence will be detected in microtubules. -There appears to be some striking evidence for effects of this general nature in Bandyopadhyay's recent results [88,89], differing hugely from classical expectations, where electrical resistance drops dramatically, at certain very specific frequencies, in a largely temperature-independent and length-independent way. 9. Cortical dendrites contain largely 'A-lattice', compared to B-lattice, microtubules. -Although there is some contrary evidence to this assertion, the actual situation remains unclear. Orch OR has been criticized because mouse brain microtubules are predominantly B lattice MTs. However these same mouse brain MTs are partially A-lattice configuration, and other research shows mixed A and B lattice MTs [156-158]. Bandyopadhyay has preliminary evidence that MTs can shift between A- and B-lattice configurations [142], and A-lattices may be specific for quantum processes. Orch OR could also utilize B lattices, although apparently not as efficiently as A-lattice. In any case, A-lattice MTs could well be fairly rare, specific for quantum effects, and sufficient for Orch OR since the A-lattice may be needed only in a fraction of MTs in dendrites and soma, and perhaps only transiently. 10. Coherent photons will be detected from microtubules. -A positive piece of evidence in this direction is the detection of gigahertz excitations in MTs by Bandyopadhyay's group, which may be interpreted as coherent photons [88,89]. Microtubule quantum coherence is protected by actin gelation 11. Dendritic-somatic microtubules are intermittently surrounded by tight actin gel. 
-This is perhaps a moot point, now, in view of recent results by Bandyopadhyay's group, as it now appears that coherence occurs at warm temperature without actin gel. 12. Cycles of actin gelation and solution correlate with electrophysiology, e.g. gamma synchrony EEG. -Again, this now appears to be a moot point, for the same reason as above. 13. Sol-gel cycles are mediated by calcium ion flux from synaptic inputs. -No clear evidence, but again a moot point. Macroscopic quantum coherence occurs among hundreds of thousands of neurons and glia inter-connected by gap junctions -Gap junctions between glia and neurons have not been found, but gap junction interneurons interweave the entire cortex. 14. Electrotonic gap junctions synchronize neurons. -Gap junction interneurons do appear to mediate gamma synchrony EEG [49-54]. 15. Quantum tunneling occurs across gap junctions. -As yet untested. 16. Quantum correlations between microtubules in different neurons occurs via gap junctions. -As yet untested. However Bandyopadhyay has preliminary evidence that spatially separated MTs, perhaps even in different neurons, become entangled in terms of their BC resonances [142], so gap junctions may be unnecessary for Orch OR. The amount of neural tissue involved in a conscious event is inversely related to the event time by τ ≈ ħ/E_G 17. Functional imaging and electrophysiology will show perception and response time shorter with more neural mass involved. -As a 'prediction' of Orch OR, the status of this is not very clear; moreover it is very hard to provide any clear estimate of the neural mass that is involved in a 'perception'. As a related issue, there does appear to be evidence for some kind of scale-invariance in neurophysiological processes (Section 3.2 [76,77]). An unperturbed isolated quantum state self-collapses (OR) according to τ ≈ ħ/E_G 18. Technological quantum superpositions will be shown to undergo OR by τ ≈ ħ/E_G. 
-Various experiments are being developed which should supply an answer to this fundamental question [108], but they appear to remain several years away from being able to achieve firm conclusions. Microtubule-based cilia/centrioles are quantum optical devices 19. Microtubule-based cilia in retinal rod and cone cells detect photon quantum information. -This appears to be untested, so far. A critical degree of microtubule activity enabled consciousness during evolution 20. Fossils will show organisms from early Cambrian (540 million years ago) had sufficient microtubule capacity for OR by τ ≈ ħ/E_G of less than a minute, perhaps resulting in rudimentary Orch OR, consciousness and the 'Cambrian evolutionary explosion'. -It is clearly hard to know an answer to this one, particularly because the level of consciousness in extinct creatures would be almost impossible to determine. However present day organisms looking remarkably like early Cambrian creatures (actinosphaerum, nematodes) are known to have over 10^9 tubulins [56]. It would appear that the expectations of Orch OR have fared rather well so far, and it gives us a viable scientific proposal aimed at providing an understanding of the phenomenon of consciousness. We believe that the underlying scheme of Orch OR has a good chance of being basically correct in its fundamental conceptions." -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Feb 3 01:20:14 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 3 Feb 2015 12:20:14 +1100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On Saturday, January 31, 2015, John Clark wrote: > > On Fri, Jan 30, 2015 at 1:56 AM, Keith Henson > wrote: > >> >> >> Some catastrophe hits a civilization when it gets a little past >>> our level; my best guess would be the electronic equivalent of drug abuse. >> >> >> > Possible. 
But it seems an unlikely filter to get all >> possible variations on a nervous system if ET's with the capacity to affect >> the visible state of the universe are common. I suspect you need something >> fundamental that keeps every single one of them from spreading out. >> > > But that's exactly my fear, it may be fundamental. If they can change > anything in the universe then they can change the very thing that makes the > changes, themselves. There may be something about intelligence and positive > feedback loops (like having full control of your emotional control panel) > that always leads to stagnation. After all, regardless of how well our life > is going, who among us would for eternity opt out of becoming just a little > bit happier if all it took was turning a knob? And after you turn it a > little bit and see how much better you feel, why not turn it again, perhaps > a little more this time. > > The above may be pure nonsense, I sure hope so. > You could always turn the knob that makes you seek out hard work and good deeds and makes you avoid the mindless happiness knob. Having said that, however, I can think of worse things than eternal bliss. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Tue Feb 3 03:10:43 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 3 Feb 2015 14:10:43 +1100 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54CF7041.4090904@canonizer.com> References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> <54CF7041.4090904@canonizer.com> Message-ID: On 2 February 2015 at 23:40, Brent Allsop wrote: > > Hi Stathis, > > > On 2/1/2015 10:36 PM, Stathis Papaioannou wrote: >> >> It could be that CMOS sensors have qualia but you can't access them by >> interfacing with the brain, since the two systems are radically different. >> Conversely, if you stick a CMOS in your brain and experience different >> qualia that could just be due to the disruption of normal brain activity, >> and not evidence that the CMOS in a digital camera has qualia. > > > I completely agree, yes. But you are missing the more important point which > is, if this theory can be falsified, as you are doing, you simply need to > come up with a variation on the theory, until you get one that experimental > science effingly proves is the one. It has a large part to do with both > what has the quality, and how these qualities interact with the binding > mechanism. It could be that only neurons and neurotransmitters can > have and bind together qualitative properties. And that we will never find > a way to integrate stuff like CMOS, qualitatively, directly, as you predict. > But, again, what matters is the general framework where you can determine > what does, and what does not have qualia, and it is not different than the > rest of science, as long as you include this qualitative information that is > intrinsic, and not just zombie information being interpreted as if it was > the real thing. 
But in general it is impossible to devise an experiment that will test the hypothesis that something has qualia. >> You seem to forget that, whatever neurons or neuronal components might >> experience, they can only communicate with downstream neurons by the >> timing of their synaptic firing. If the timing of synaptic firing is >> unchanged, the downstream motor neurons' firings will be unchanged and >> the behaviour of the organism will be unchanged. And if the functional >> replacement for glutamate (or whatever other component) does not alter >> the sequence and firing of the neuron - which must be the case if it >> is "functionally equivalent" - then the behaviour of the organism will >> be unchanged. So the "that is real redness behaviour" will happen >> provided only that the replacement part is functionally equivalent. > > > But you are leaving out the binding system. The only way what you are > claiming could be true is if there is no theoretically possible way to do > what you are assuming can't be done. But there are many theoretical > possibilities which could bind, effingly, multiple qualities, so you are > aware of redness and greenness, at the same time, and that you can know the > difference because of this qualitative binding. The neural wave theories > that Steven Lehar talks about are just one example of many possible theories > that can bind together waves, over areas of time and space, which falsifies > what you are claiming, that you cannot bind together things like this, to be > aware of both of them, qualitatively, at the same time. I do not claim that it is impossible to bind different qualia simultaneously. What I am claiming is that if you replace a part of the brain with another part that functions identically, in the sense of reproducing its I/O behaviour, then all of the behaviour and all of the experiences will be unchanged. 
Therefore qualia cannot be due to particular matter; they must be due to the functional organisation of the brain. You seem to be agreeing with this because you say that if we replace a part of the brain and then test the subject by asking him about his qualia, and find no difference, then that is evidence that the replacement part has similar qualia to the original part. Have I got it right or do you disagree with this? >> It is the function of the part, not the matter it is made out of, that >> determines this. If you agree that, given identical function, then the >> qualia must also be identical, then you are a functionalist. > > > Again, I completely agree with this. But the higher level testability, > qualitatively, still applies. When you are aware of redness, there must be > something that is detectably causing you to be aware of this. And there > must be something that is detectably different than this, which is reliably > responsible for us being aware of greenness. Yes. > Since no functionalist ever gives any possible way to reliably detect this > difference, even if it is an obviously falsifiable difference, it makes it > hard to use functionalist theories to talk about the more important part of > how to falsifiably detect what is, and is not responsible for redness. All > you need to do is replace glutamate with whatever you could possibly > predict is detectably responsible for redness. A functionalist would say that if you knock out the glutamate and the red qualia disappear then the glutamate was necessary (though perhaps not sufficient) to produce red qualia; and replace the glutamate with a functional equivalent and the red qualia will be preserved. > I've tried to find any possible way to detect, functionally, what is > responsible for redness, and how this could be different than some > functionalist greenness, but I can't, without it being circular, and not > defined anywhere. 
If you can provide any way to test for this, I will be > glad to include the predictions you provide, along with the different > predictions being included with materialist theories, so we can both leave > it to scientific demonstration to prove which is right. But till you can > provide how to test for what you are claiming, there isn't much I can do. > Seems to me you must admit that if a materialist theory works, and you can > use waves, or anything else to bind them together, as you predict can't be > done, you must admit that functionalism would be scientifically proven > wrong. > > And the same is true for detectable functionalism. But, we must first find, > either theoretically or experimentally, some reasonable testable theory > which is detectably responsible for us experiencing a redness quality. I must be missing something because it seems straightforward to me: knock out a part of the system you suspect is involved in a certain type of experience and that experience should be affected. -- Stathis Papaioannou From spike66 at att.net Tue Feb 3 03:01:32 2015 From: spike66 at att.net (spike) Date: Mon, 2 Feb 2015 19:01:32 -0800 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: <055001d03f5d$b7e019d0$27a04d70$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stathis Papaioannou Sent: Monday, February 02, 2015 5:20 PM To: ExI chat list Subject: Re: [ExI] taxonomy for fermi paradox fans: On Saturday, January 31, 2015, John Clark wrote: >> And after you turn it a little bit and see how much better you feel why not turn it again, perhaps a little more this time? John > You could always turn the knob that makes you seek out hard work and good deeds and makes you avoid the mindless happiness knob. Having said that, however, I can think of worse things than eternal bliss. Stathis Papaioannou All of Religion Inc, every brand and every variation, is all about eternal bliss. 
As far as I know, none of it carries any notion of long-term actually accomplishing anything. It is all about making sure the flock keeps dropping money into the offering plate in exchange for assurances they will achieve the eternal bliss. I am reluctant to admit it, but this supports John's notion of stagnation being universal as soon as any intelligent species achieves eternal bliss. Gaaaaahd Daaaaaaam! {8-[ Hey, there's an idea: invent a religion with a couple of cool variations on the usual tired old theme. In this one, you aren't required to believe in it. Belief isn't an inherent virtue; only actual virtue is an inherent virtue. Imagine that. Second, the eternal bliss isn't perfect and it is not perfectly pointless: there is work to be done and things to build. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Feb 3 03:34:10 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 2 Feb 2015 22:34:10 -0500 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On Mon, Feb 2, 2015 Stathis Papaioannou wrote: > >You could always turn the knob that makes you seek out hard work and good > deeds and makes you avoid the mindless happiness knob. > Hard work by itself could be as useless as the happiness knob, you could end up digging holes and then immediately filling them up again. Your hard work needs to be part of achieving something important, but achieving important things is rare and so your related pleasure reward would be rare too, no match for just turning the old happiness knob. > and good deeds It's the same problem. Curing cancer would be a good deed but it's hard and doing something that great is rare. 
But maybe there is a way out. It's a cliche to say you should know yourself, but that could be the fundamental problem: if you don't understand yourself really well you won't have access to your emotional control panel. So maybe in a population of Asteroid Brains I would try to understand you so well that I could upgrade you and make you smarter, and you would try to understand me so well that you could upgrade me and make me smarter. That way neither of us understands ourselves well enough to gain access to our own emotional control panel and gets caught in a positive feedback loop. > > Having said that, however, I can think of worse things than eternal > bliss. > Well yeah, it beats getting poked in the eye with a sharp stick. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Feb 3 04:04:59 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 2 Feb 2015 23:04:59 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> <54CF00E6.4020802@canonizer.com> Message-ID: On Mon, Feb 2, 2015 Brent Allsop wrote: >> Humans can see about 10 million colors, beyond that 2 colors are too >> similar for us to tell apart. If a transhuman wished to have better color >> discrimination he would need to see more than 10 million colors, and so >> obviously he would subjectively experience more colors than we do. >> > > > You forgot to include: "and yes, once I experience these new colors, as > Brent's theory predicts, much of my thinking would be falsified. > How do you figure that? 
I never claimed that the amount of color discrimination that humans have is as good as it could get, and if something can discriminate between more than 10 million colors then obviously they need more labels than we do, that is to say they can see colors that we can not. I have absolutely no problem with that. > > Including my surprise that I can experience this new blue in a dark room > with no light at all. > I always knew that if you sat in a dark room and closed your eyes and applied pressure to your closed eyelids you would see colors. So what am I supposed to be surprised about? > > Boy, it was a complete waste of time to ever talk about light, > And I said more than once that REDNESS and electromagnetic radiation with a 620 NM wavelength were NOT the same thing. And I'm not unusual, I don't think anybody believes they are the same, except perhaps for some men of straw. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Feb 3 05:45:39 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 3 Feb 2015 16:45:39 +1100 Subject: [ExI] taxonomy for fermi paradox fans: In-Reply-To: References: Message-ID: On 3 February 2015 at 14:34, John Clark wrote: > On Mon, Feb 2, 2015 Stathis Papaioannou wrote: > >> >> >You could always turn the knob that makes you seek out hard work and good >> > deeds and makes you avoid the mindless happiness knob. > > > Hard work by itself could be as useless as the happiness knob, you could end > up digging holes and then immediately filling them up again. Your hard work > needs to be part of achieving something important, but achieving important > things is rare and so your related pleasure reward would be rare too, no > match for just turning the old happiness knob. > >> > and good deeds > > > It's the same problem. Curing cancer would be a good deed but it's hard and > doing something that great is rare. 
On the other hand performing bad deeds > is easy and common, and I hate to think where that might lead. Many people in our society actually do good deeds, work hard, exercise and so on where it would be easier and more immediately rewarding to sit in front of the TV eating potato chips and doing drugs. This is because they have the type of mind that makes them behave this way. Other people don't have this type of mind, but wish they did. In the future, when everyone can reconstruct their mind the way they want, it will be easier, not harder, to avoid the trap of the happiness knob. -- Stathis Papaioannou From foozler83 at gmail.com Tue Feb 3 15:38:32 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 3 Feb 2015 09:38:32 -0600 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54B8A105.9070208@canonizer.com> <54C5B82E.2030303@canonizer.com> <54CCF2AA.90300@canonizer.com> <54CD5714.4070307@canonizer.com> <54CEB2B7.70804@canonizer.com> <54CF00E6.4020802@canonizer.com> Message-ID: > And I said more than once that REDNESS and electromagnetic radiation with a 620 NM wavelength were NOT the same thing. And I'm not unusual, I don't think anybody believes they are the same, except perhaps for some men of straw. John K Clark Clearly they are not. (Obviously not in the case of red/green color blindness). I was very surprised when I went to my audiologist to eliminate some noise in a hearing aid. I asked him why he didn't listen to it. He said that different people would hear different things through the aid. So if he listened to it he would not hear what I hear. Some interaction between aid and person. Hmmm. Bill W -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at canonizer.com Wed Feb 4 04:56:48 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 03 Feb 2015 21:56:48 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> <54CF7041.4090904@canonizer.com> Message-ID: <54D1A690.4000101@canonizer.com> Hi Stathis, On 2/2/2015 8:10 PM, Stathis Papaioannou wrote: > But in general it is impossible to devise an experiment that will test > the hypothesis that something has qualia. But we describe in the paper a weak, stronger, and strongest form of effing the ineffable, which, if validated by experiment, will falsify your assertion that this is impossible, right? Do you predict all of these are impossible, or just some parts of some? > What I am claiming is that if you replace a part of > the brain with another part that functions identically, in the sense > of reproducing its I/O behaviour, then all of the behaviour and all of > the experiences will be unchanged. Therefore qualia cannot be due to > particular matter, they must be due to the functional organisation of > the brain. You seem to be agreeing with this because you say that if > we replace a part of the brain and then test the subject by asking him > about his qualia, and find no difference, then that is evidence that > the replacement part has similar qualia to the original part. Have I > got it right or do you disagree with this? Let's back up a bit here. Do you not agree that the word 'red' is zombie information, and has nothing to do with redness, other than that it has sufficient diversity to be interpreted as if it did, and that only with that interpretation is it able to behave as if it was the real intrinsic redness? 
Why is the substitution experiment not just replacing the real things with zombie information that only functions the same, to the degree that you include the correct interpretation hardware which is capable of interpreting that which does not have a redness quality, as if it does? Sure they function the same, but one does not need interpretation hardware, and the other does. I don't understand why you can't clearly see the significance of this difference, whether something functional or material is responsible for redness. One has a true redness experience. By definition, the zombie simulation does not, yet it is perfectly able to function the same, but only because of the interpretation hardware. I've tried to explain this many, many times, yet you still say behaving the same is "evidence that the replacement part has similar qualia to the original," even though one, by definition, only has zombie information which, by definition, does not have the original intrinsic redness quality. >>> It is the function of the part, not the matter it is made out of, that >>> determines this. If you agree that, given identical function, then the >>> qualia must also be identical, then you are a functionalist. >> >> Again, I completely agree with this. But the higher level testability, >> qualitatively, still applies. When you are aware of redness, there must be >> something that is detectably causing you to be aware of this. And there >> must be something that is detectably different than this, which is reliably >> responsible for us being aware of greenness. > Yes. > >> Since no functionalist ever gives any possible way to reliably detect this >> difference, even if it is an obviously falsifiable difference, it makes it >> hard to use functionalist theories to talk about the more important part of >> how to falsifiably detect what is, and is not responsible for redness. 
All >> you need to do is replace glutamate with whatever you could possibly >> predict is detectably responsible for redness. > A functionalist would say that if you knock out the glutamate and the > red qualia disappear then the glutamate was necessary (though perhaps > not sufficient) to produce red qualia; and replace the glutamate with > a functional equivalent and the red qualia will be preserved. > >> I've tried to find any possible way to detect, functionally, what is >> responsible for redness, and how this could be different than some >> functionalist greenness, but I can't, without it being circular, and not >> defined anywhere. If you can provide any way to test for this, I will be >> glad to include the predictions you provide, along with the different >> predictions being included with materialist theories, so we can both leave >> it to scientific demonstration to prove which is right. But till you can >> provide how to test for what you are claiming, there isn't much I can do. >> Seems to me you must admit that if a materialist theory works, and you can >> use waves, or anything else to bind them together, as you predict can't be >> done, you must admit that functionalism would be scientifically proven >> wrong. >> >> And the same is true for detectable functionalism. But, we must first find, >> either theoretically or experimentally, some reasonable testable theory >> which is detectably responsible for us experiencing a redness quality. > I must be missing something because it seems straightforward to me: > knock out a part of the system you suspect is involved in a certain > type of experience and that experience should be affected. > Yes, that is exactly what I am saying will enable us to detect and predict when someone is, and is not, experiencing redness. But if you know this is possible, I can't understand why you are saying this can't be done. 
Brent Allsop From pharos at gmail.com Wed Feb 4 17:08:33 2015 From: pharos at gmail.com (BillK) Date: Wed, 4 Feb 2015 17:08:33 +0000 Subject: [ExI] META: SOFTWARE: Google Earth Pro now free Message-ID: Google has announced that the Professional version of Google Earth is now available at no charge. Was 399 USD! Download link here: Google Earth Pro includes the same features and imagery as Google Earth, but with additional professional tools designed specifically for business users. Sign in using your email address and the License Key GEPFREE BillK From brent.allsop at canonizer.com Wed Feb 4 17:17:21 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Wed, 4 Feb 2015 10:17:21 -0700 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: <54D1A690.4000101@canonizer.com> References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> <54CF7041.4090904@canonizer.com> <54D1A690.4000101@canonizer.com> Message-ID: Hi Stathis, Had another thought that might help us communicate, regarding what you said: On 2/2/2015 8:10 PM, Stathis Papaioannou wrote: > > What I am claiming is that if you replace a part of >> the brain with another part that functions identically, in the sense >> of reproducing its I/O behaviour, then all of the behaviour and all of >> the experiences will be unchanged. > > If you have a reliable system that detects real glutamate, and can never be fooled, you will not be able to reproduce the detection of real glutamate as reliably, just by reproducing the detector with some other simulated zombie I/O behavior, right? And once you replace the real glutamate, with simulated Zombie versions that aren't anything like the real thing, then this is clearly not the same, would you not agree? And you can clearly make the claim that there is no real glutamate being detected by the simulation?
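The substitution claim being debated here can be put in a toy sketch (an invented illustration, not a model of real neurochemistry): two "receptor" implementations with identical input/output behaviour are indistinguishable to any test that observes only I/O, which is exactly the point of contention, namely whether anything beyond I/O could still differ.

```python
# Toy illustration of I/O equivalence (invented for this discussion,
# not a model of real neurochemistry).

class GlutamateReceptor:
    """Stands in for the original mechanism."""
    def signal(self, ligand):
        # Fires only when real glutamate binds.
        return ligand == "glutamate"

class NanodemonReceptor:
    """Stands in for the nanodemon replacement: different internals,
    identical input/output behaviour."""
    def signal(self, ligand):
        # The demon squeezes the receptor into the shape glutamate
        # would, so the output is the same for every input.
        return ligand == "glutamate"

def behavioural_test(receptor):
    """A test that can see only inputs and outputs."""
    return [receptor.signal(x) for x in ("glutamate", "GABA", "dopamine")]

# Identical I/O: no behavioural test can tell the two apart.
print(behavioural_test(GlutamateReceptor()) == behavioural_test(NanodemonReceptor()))  # prints True
```

No purely behavioural probe distinguishes the two; whether something further (an intrinsic "redness quality") distinguishes them is precisely what is at issue between the two positions.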
Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Feb 4 21:03:17 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 5 Feb 2015 08:03:17 +1100 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> <54CF7041.4090904@canonizer.com> <54D1A690.4000101@canonizer.com> Message-ID: On Thursday, February 5, 2015, Brent Allsop wrote: > > Hi Stathis, > > Had another thought that might help us communicate about when you said: > > On 2/2/2015 8:10 PM, Stathis Papaioannou wrote: >> >> What I am claiming is that if you replace a part of >>> the brain with another part that functions identically, in the sense >>> of reproducing its I/O behaviour, then all of the behaviour and all of >>> the experiences will be unchanged. >> >> > > If you have a reliable system that detects real glutamate, and can never > be fooled, you will not be able to reproduce the detection of real > glutamate as reliably, just by reproducing the detector with some other > simulated zombie I/O behavior, right? > > And once you replace the real glutamate, with simulated Zombie versions > that aren't anything like the real thing, then this is clearly not the > same, would you not agree? And you can clearly make the claim that there > is no real glutamate being detected by the simulation? > Glutamate has a particular function in the brain, which is to be released into the synaptic cleft when an appropriate signal arrives down the axon and attach to glutamate receptors, causing a change in their conformation and thereby triggering a series of events in the postsynaptic neuron. 
You could imagine replacing all the glutamate in the brain with nanodemons that lie in wait at the synapse and on the appropriate signal jump out, grab the glutamate receptors with their little arms, and squeeze them into the same shape glutamate would. Assume that glutamate is implicated in experiencing redness. These demons are clearly very different from glutamate, but the subject who has his glutamate replaced with the demons will say, when you ask him, that he still sees red, and will be able to correctly identify red. Do you see that this must be so? Do you believe that the subject really does see red, the same as before the replacement? How do you explain this, if all the glutamate has gone? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Feb 4 22:06:23 2015 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 4 Feb 2015 17:06:23 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> <54CF7041.4090904@canonizer.com> <54D1A690.4000101@canonizer.com> Message-ID: On Wed, Feb 4, 2015 at 4:03 PM, Stathis Papaioannou wrote: > Glutamate has a particular function in the brain, which is to be released > into the synaptic cleft when an appropriate signal arrives down the axon and > attach to glutamate receptors, causing a change in their conformation and > thereby triggering a series of events in the postsynaptic neuron. You could > imagine replacing all the glutamate in the brain with nanodemons that lie in > wait at the synapse and on the appropriate signal jump out, grab the > glutamate receptors with their little arms, and squeeze them into the same > shape glutamate would. Assume that glutamate is implicated in experiencing > redness. 
These demons are clearly very different from glutamate, but the > subject who has his glutamate replaced with the demons will say, when you > ask him, that he still sees red, and will be able to correctly identify red. > Do you see that this must be so? Do you believe that the subject really does > see red, the same as before the replacement? How do you explain this, if all > the glutamate has gone? If you play middle-C on a piano and you have perfect pitch you say "ah.. middle C" If you play middle-C through a speaker and you have perfect pitch you say "ah.. middle C" Does it matter what produces the 261.625565 hertz air vibrations? Replace the detector (audio) with an oscilloscope (sight) ... You see the screen and say "ah.. middle C" I'm not really sure what is ineffable anymore. "Audible middle C" vs "Visible middle C" ? is the "ineffable" part just the bit of distinguishing details below the threshold of relevance? Pi is 3.14; no it isn't: Pi is 3.14159; no it isn't: Pi is the ratio of circumference to diameter of a circle; ... Are these three descriptions of Pi somehow 'zombie information' regarding the True Pi Quale? How much double-plus adjective does it take to ever resolve the defined-as-ineffable? 1/0 ? idk. :( From johnkclark at gmail.com Thu Feb 5 01:03:15 2015 From: johnkclark at gmail.com (John Clark) Date: Wed, 4 Feb 2015 20:03:15 -0500 Subject: [ExI] Fwd: Paper on "Detecting Qualia" presentation at 2015 MTA conference In-Reply-To: References: <54CCC601.2000906@canonizer.com> <54CD4BF4.9050409@canonizer.com> <54CEBA41.6020300@canonizer.com> <54CEFA96.4010709@canonizer.com> <54CF7041.4090904@canonizer.com> <54D1A690.4000101@canonizer.com> Message-ID: On Wed, Feb 4, 2015 Brent Allsop wrote: > > > once you replace the real glutamate, with simulated Zombie versions that > aren't anything like the real thing, then this is clearly not the same, > would you not agree? > I don't know if I agree or not because I don't know what Zombie glutamate is. 
But I do know that I am conscious; I also know that you are clearly not the same as me. So should I assume that you're conscious too, or are the differences between us too great for me to make such a bold assumption? John K Clark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Feb 6 19:17:23 2015 From: spike66 at att.net (spike) Date: Fri, 6 Feb 2015 11:17:23 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads Message-ID: <001201d04241$8a095800$9e1c0800$@att.net> This story claims they can get 100 pound payloads to LEO for a million bucks everything included. If so, that would put these payloads in the range of universities, smaller companies, well-funded amateur groups, solar power experiments and so forth: http://video.foxnews.com/v/4036325757001/watch-how-darpa-plans-to-launch-satellites-into-space/?#sp=show-clips That plane they show as an artist concept is neither F-4, F-14, nor FA-18, but has elements of all three. It would be interesting to look at taking an F-4, which can be had for practically free, retrofitting it with lighter wings and landing gear, get rid of all the war junk, the tailhook and all that heavy stuff, look at a rail launch system so that the new landing gear need not support the weight and physical envelope of the payload, see what takes shape. Perhaps we could do a rail-takeoff, air-breathing climb-out to about 40k-ft and 500-ish knots (subsonic) at 45 degrees to horizon, drop the payload, runway landing, fuel-up and turn around in a couple hours. That would be a fun optimization problem. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 8 22:31:22 2015 From: spike66 at att.net (spike) Date: Sun, 8 Feb 2015 14:31:22 -0800 Subject: [ExI] most common jobs Message-ID: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Now this is a cool and thought provoking site.
What happens when robo-trucks start to take over the job of truck driver? http://www.npr.org/blogs/money/2015/02/05/382664837/map-the-most-common-job-in-every-state?utm_source=npr_newsletter&utm_medium=email&utm_content=20150208&utm_campaign=mostemailed&utm_term=nprnews spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Feb 8 23:40:51 2015 From: pharos at gmail.com (BillK) Date: Sun, 8 Feb 2015 23:40:51 +0000 Subject: [ExI] most common jobs In-Reply-To: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: On 8 February 2015 at 22:31, spike wrote: > What happens when robo-trucks start to take over the job of truck driver? > That question is beginning to get more attention. What will the developed countries do with all the low skilled people? Soon they won't even be required to fight in wars. The robots *are* coming for their jobs. I can see the 0.1% rich living in gated estates and on tropical islands while there are cities of people living near subsistence level. The guard robots will protect the rich. And the rich may feel that paying a small tax (as a proportion of their huge income) to support the rest of the population is a justified expense. (To keep the people quiet). If entertainment, drugs and social activities are available with sufficient basic income to live on, the people probably could be mostly happy with that deal. And the poor cities can run their own basic services like police, hospitals, etc. to provide minimal services. Might turn out OK.
BillK From stathisp at gmail.com Mon Feb 9 00:00:06 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 9 Feb 2015 11:00:06 +1100 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: On 9 February 2015 at 10:40, BillK wrote: > On 8 February 2015 at 22:31, spike wrote: >> What happens when robo-trucks start to take over the job of truck driver? >> > > > That question is beginning to get more attention. > What will the developed countries do with all the low skilled people? > Soon they won't even be required to fight in wars. > > The robots *are* coming for their jobs. > > I can see the 0.1% rich living in gated estates and on tropical > islands while there are cities of people living near subsistence > level. The guard robots will protect the rich. And the rich may feel > that paying a small tax (as a proportion of their huge income) to > support the rest of the population is a justified expense. (To keep > the people quiet). > > If entertainment, drugs and social activities are available with > sufficient basic income to live on, the people probably could be > mostly happy with that deal. And the poor cities can run their own > basic services like police, hospitals, etc. to provide minimal > services. > > Might turn out OK. > > BillK High levels of automation and unemployment do not seem to be correlated at all as far as I can tell. -- Stathis Papaioannou From johnkclark at gmail.com Mon Feb 9 01:52:36 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 8 Feb 2015 20:52:36 -0500 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: On Sun, Feb 8, 2015 Stathis Papaioannou wrote: > > High levels of automation and unemployment do not seem to be > correlated at all as far as I can tell. 
> But it's only a matter of time till they are, and probably not a great deal of time John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From crw at crw.io Mon Feb 9 01:37:29 2015 From: crw at crw.io (crw) Date: Sun, 08 Feb 2015 17:37:29 -0800 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: <54D80F59.7070408@crw.io> On 2/8/2015 4:00 PM, Stathis Papaioannou wrote: > On 9 February 2015 at 10:40, BillK wrote: >> On 8 February 2015 at 22:31, spike wrote: >>> What happens when robo-trucks start to take over the job of truck driver? >>> >> The robots *are* coming for their jobs. >> >> ... >> >> Might turn out OK. >> >> BillK > High levels of automation and unemployment do not seem to be > correlated at all as far as I can tell. > > Coincidentally, I just happened across this relevant post: A Tale of Two Zippers http://www.bunniestudios.com/blog/?p=4364 "They've already automated everything else in this factory, so I figure they've thought long and hard about this problem, too. My guess is that robots are expensive to build and maintain; people are self-replicating and largely self-maintaining. Remember that third input to the factory, "rice"? Any robot's spare parts have to be cheaper than rice to earn a place on this factory's floor." Food for thought? :) -crw From stathisp at gmail.com Mon Feb 9 03:46:45 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 9 Feb 2015 14:46:45 +1100 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: On 9 February 2015 at 12:52, John Clark wrote: > On Sun, Feb 8, 2015 Stathis Papaioannou wrote: > > >> > High levels of automation and unemployment do not seem to be >> correlated at all as far as I can tell.
> > > But it's only a matter of time till they were, and probably not a great deal > of time > > John K Clark If you looked at the jobs people did a few centuries ago and consider what effect automation has had on them, you might conclude that almost everyone today should be unemployed. But that is not in fact what has happened. -- Stathis Papaioannou From kellycoinguy at gmail.com Mon Feb 9 06:40:27 2015 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 8 Feb 2015 23:40:27 -0700 Subject: [ExI] most common jobs In-Reply-To: <54D80F59.7070408@crw.io> References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> <54D80F59.7070408@crw.io> Message-ID: On Sun, Feb 8, 2015 at 6:37 PM, crw wrote: > > On 2/8/2015 4:00 PM, Stathis Papaioannou wrote: > >> On 9 February 2015 at 10:40, BillK wrote: >> >>> On 8 February 2015 at 22:31, spike wrote: >>> >>>> What happens when robo-trucks start to take over the job of truck >>>> driver? >>> >>> Collecting unemployment insurance will then become a growth industry. > >>>> A Tale of Two Zippers > http://www.bunniestudios.com/blog/?p=4364 > > "They've already automated everything else in this factory, so I figure > they've thought long and hard about this problem, too. My guess is that > robots are expensive to build and maintain; people are self-replicating and > largely self-maintaining. Remember that third input to the factory, "rice"? > Any robot's spare parts have to be cheaper than rice to earn a place on > this factory's floor." > > Food for thought? :) Seems that this is an easy problem to fix with a computer, a camera and some OpenCV code written by a poor Romanian programmer. Maybe even less if you speak enough Chinese to hire one of them. I could fix that problem for less than $1000. My guess is the guy feeding the zippers into the machine is a relative of the person who owns the factory. This is how many things work in China. It even has a special name, though I can't remember it well enough now to look it up.
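The sort of camera check being described takes only a few lines of decision logic. Below is a toy sketch with invented pixel values (a real rig would use OpenCV's thresholding and contour functions on actual camera frames, not hand-made grids): treat a frame as a grayscale grid and accept a part only if its bright region, standing in here for the zipper slider, covers roughly the expected area.

```python
# Toy inspection sketch (invented pixel values; a real system would use
# OpenCV thresholding/contours on camera frames). A "frame" is a grayscale
# grid; a part passes if its bright region covers about the expected area.

def bright_area(frame, threshold=128):
    """Count pixels brighter than the threshold."""
    return sum(1 for row in frame for px in row if px > threshold)

def part_ok(frame, expected=4, tolerance=1, threshold=128):
    """Accept the part if the bright area is within tolerance of expected."""
    return abs(bright_area(frame, threshold) - expected) <= tolerance

good = [[0, 200, 200, 0],
        [0, 200, 200, 0]]   # slider present: 4 bright pixels
bad  = [[0, 0, 0, 0],
        [0, 200, 0, 0]]     # slider missing or misfed: 1 bright pixel

print(part_ok(good), part_ok(bad))  # prints: True False
```

The hard parts in practice are lighting, calibrating the expected area, and the mechanical handling; the image-side logic itself is cheap, which is the point being made about the cost.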
-Kelly -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Mon Feb 9 07:56:48 2015 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 9 Feb 2015 00:56:48 -0700 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: <001201d04241$8a095800$9e1c0800$@att.net> References: <001201d04241$8a095800$9e1c0800$@att.net> Message-ID: Looks like a rip off of Virgin Galactic's basic concept. My question is why would this cost a million dollars?? -Kelly On Fri, Feb 6, 2015 at 12:17 PM, spike wrote: > > > This story claims they can get 100 pound payloads to LEO for a million > bucks everything included. If so, that would put these payloads in the > range of universities, smaller companies, well-funded amateur groups, solar > power experiments and so forth: > > > > > http://video.foxnews.com/v/4036325757001/watch-how-darpa-plans-to-launch-satellites-into-space/?#sp=show-clips > > > > That plane they show as an artist concept is neither F-4, F-14 or FA-18, > but has elements of all three. It would be interesting to look at taking > an F-4, which can be had for practically free, retrofitting it with lighter > wings and landing gear, get rid of all the war junk, the tailhook and all > that heavy stuff, look at a rail launch system so that the new landing gear > need not support the weight and physical envelope of the payload, see what > takes shape. > > > > Perhaps we could do a rail-takeoff, air-breathing climb-out to about > 40k-ft and 500-ish knots (subsonic) at 45 degrees to horizon, drop the > payload, runway landing, fuel-up and turn around in a couple hours. That > would be a fun optimization problem. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From markalanwalker at gmail.com Mon Feb 9 13:07:54 2015 From: markalanwalker at gmail.com (Mark Walker) Date: Mon, 9 Feb 2015 06:07:54 -0700 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: This is the usual objection of economists. I side with Chicken Little in thinking the employment sky will fall: http://jetpress.org/v24/walker.htm. Dr. Mark Walker Richard L. Hedden Chair of Advanced Philosophical Studies Department of Philosophy New Mexico State University P.O. Box 30001, MSC 3B Las Cruces, NM 88003-8001 USA http://www.nmsu.edu/~philos/mark-walkers-home-page.html On Sun, Feb 8, 2015 at 8:46 PM, Stathis Papaioannou wrote: > On 9 February 2015 at 12:52, John Clark wrote: > > On Sun, Feb 8, 2015 Stathis Papaioannou wrote: > > > > > >> > High levels of automation and unemployment do not seem to be > >> correlated at all as far as I can tell. > > > > > > But it's only a matter of time till they were, and probably not a great > deal > > of time > > > > John K Clark > > If you looked at the jobs people did a few centuries ago and consider > what effect automation has had on them, you might conclude that almost > everyone today should be unemployed. But that is not in fact what has > happened. > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Mon Feb 9 15:35:02 2015 From: spike66 at att.net (spike) Date: Mon, 9 Feb 2015 07:35:02 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> Message-ID: <014701d0447d$f9743e10$ec5cba30$@att.net> http://video.foxnews.com/v/4036325757001/watch-how-darpa-plans-to-launch-satellites-into-space/?#sp=show-clips From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 08, 2015 11:57 PM To: ExI chat list Subject: Re: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads >...Looks like a rip off of Virgin Galactic's basic concept. My question is why would this cost a million dollars?? -Kelly I can think that a lot of the cost of this would be in the inertial reference gear in the guidance system. For typical satellite work, that stuff doesn't come cheap. But given some kind of standard bus, there would be mass production, thousands of units perhaps. I can imagine a standardized thousand unit production run of guidance and control systems including thrust vector control getting down below a million clams, and everything else combined below 100k. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 9 15:36:53 2015 From: spike66 at att.net (spike) Date: Mon, 9 Feb 2015 07:36:53 -0800 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: <014c01d0447e$3b8822d0$b2986870$@att.net> >...If you looked at the jobs people did a few centuries ago and consider what effect automation has had on them, you might conclude that almost everyone today should be unemployed. But that is not in fact what has happened.
-- Stathis Papaioannou _______________________________________________ By the standards of people from a few centuries ago, almost everyone today is unemployed. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Mon Feb 9 18:25:09 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Mon, 9 Feb 2015 12:25:09 -0600 Subject: [ExI] most common jobs In-Reply-To: <014c01d0447e$3b8822d0$b2986870$@att.net> References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> <014c01d0447e$3b8822d0$b2986870$@att.net> Message-ID: On Mon, Feb 9, 2015 at 9:36 AM, spike wrote: > >...If you looked at the jobs people did a few centuries ago and consider > what effect automation has had on them, you might conclude that almost > everyone today should be unemployed. But that is not in fact what has > happened. -- Stathis Papaioannou > _______________________________________________ > > > > By the standards of people from a few centuries ago, almost everyone today > is unemployed. > > > > spike > I dunno but a lot of people around here would be more than happy to have some cotton to chop and pick and some corn to hoe... bill w > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Feb 9 18:42:55 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 9 Feb 2015 13:42:55 -0500 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: On Sun, Feb 8, 2015 Stathis Papaioannou wrote: >>> High levels of automation and unemployment do not seem to be correlated > at all as far as I can tell.
> >> But it's only a matter of time till they were, and probably not a great > deal of time > > If you looked at the jobs people did a few centuries ago and consider > what effect automation has had on them, you might conclude that almost > everyone today should be unemployed. But that is not in fact what has > happened. That's because there are still a number of tasks that machines aren't very good at, but as time progresses the number of such tasks will inevitably decrease. And today in developed countries people only work 40 hours a week (35 in France), and they get several weeks paid vacation a year (at least 5 in France); a few centuries ago that would have been considered largely unemployed. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From outlawpoet at gmail.com Mon Feb 9 19:13:01 2015 From: outlawpoet at gmail.com (justin corwin) Date: Mon, 9 Feb 2015 11:13:01 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: <014701d0447d$f9743e10$ec5cba30$@att.net> References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: I don't think I've ever seen a setup with engines mounted so high on the fuselage, is there an engine/reference design that could be used for that? The artist seemed to have some very specific details one wouldn't just assume. I like the idea of establishing some numbers. I know that BAE in Mojave makes (or made) drones out of F-4 Phantoms to be used as targets for missile tests. Any chance somebody knows what they pay for those? You'd want it unmanned anyway. At that weight, you're talking about a ring of 6 P-POD launchers (or a similarly sized single payload), which is a good amount of money to fund it. Cubesat launch slots have been getting more expensive lately. When I first started looking they were in the 70k range, but recent costs are over 100k.
6 pods is 18U of space, which puts your payload at almost 2 million, almost enough to underwrite a second attempt if DARPA's numbers are anywhere near right. If there were a dedicated small payload rocket, I think more cubesats might get made, but assuming there aren't, you have a market of about two successful launches a year of an 18U carrier. If you could kick your prices down, more might occur. Also, currently small payloads often have to wait, which means timely payloads need dedicated launchers. If you could guarantee a launch within six months, you might attract more business as well. On Mon, Feb 9, 2015 at 7:35 AM, spike wrote: > > > > > > > > http://video.foxnews.com/v/4036325757001/watch-how-darpa-plans-to-launch-satellites-into-space/?#sp=show-clips > > > > > > > > *From:* extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] *On > Behalf Of *Kelly Anderson > *Sent:* Sunday, February 08, 2015 11:57 PM > *To:* ExI chat list > *Subject:* Re: [ExI] darpa's notion of using a retrofitted fighter jet to > launch payloads > > > > >...Looks like a rip off of Virgin Galactic's basic concept. My question is > why would this cost a million dollars?? -Kelly > > > > > > I can think that a lot of the cost of this would be in the inertial > reference gear in the guidance system. For typical satellite work, that > stuff doesn't come cheap. But given some kind of standard bus, there would > be mass production, thousands of units perhaps. I can imagine a > standardized thousand unit production run of guidance and control systems > including thrust vector control getting down below a million clams, and > everything else combined below 100k.
> > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Justin Corwin outlawpoet at gmail.com http://programmaticconquest.tumblr.com http://outlawpoet.tumblr.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Feb 9 20:10:07 2015 From: pharos at gmail.com (BillK) Date: Mon, 9 Feb 2015 20:10:07 +0000 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: On 9 February 2015 at 19:13, justin corwin wrote: > I don't think I've ever seen a setup with engines mounted so high on the > fuselage, is there an engine/reference design that could be used for that? > The artist seemed to have some very specific details one wouldn't just > assume. > It is a new design for small rockets launched from aircraft. There is a good review here: Quote: Boeing plans to take a unique approach with the ALASA launch vehicle that is also intended to lower complexity and thus costs. The rocket will be powered by a monopropellant: a combination of nitrous oxide and acetylene, mixed together in the same propellant tank and "slightly chilled" below room temperature, Clapp said. That propellant choice offers simplicity as well as a specific impulse "not far off" from LOX and RP-1. "That's kind of a big deal," he said. "In general, it's a dramatic simplification of the complexity of a rocket vehicle." The rocket's design is also unusual, mounting four engines just below the payload on the vehicle. The engines are used for the first and second stages of the rocket, with propellant tanks below the engines dropping away when exhausted. This approach avoids the expense and complexity of separate sets of engines for the first two stages.
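The "not far off" claim can be made quantitative with the Tsiolkovsky rocket equation. The specific impulse figures below are my own assumed numbers for illustration (the quoted article gives none): at a fixed mass ratio, a modest Isp penalty translates into the same fractional delta-v penalty.

```python
# Toy delta-v comparison using the Tsiolkovsky rocket equation.
# Illustrative numbers only: the Isp values and the 10:1 mass ratio
# are assumptions, not figures from the ALASA article.
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m0, mf):
    """Ideal delta-v for specific impulse isp_s (seconds),
    initial mass m0 and final mass mf (same units)."""
    return isp_s * G0 * math.log(m0 / mf)

# Assumed vacuum Isp: ~350 s for a LOX/RP-1-class engine, ~300 s for a
# somewhat poorer monopropellant; 10:1 mass ratio in both cases.
dv_biprop = delta_v(350, 10.0, 1.0)
dv_mono = delta_v(300, 10.0, 1.0)
print(round(dv_biprop), round(dv_mono))  # prints: 7903 6774
```

The ratio of the two results equals the ratio of the assumed Isp values (about a 14% penalty here), so "not far off" in Isp really does mean "not far off" in delta-v for the same stage mass fractions.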
------------- BillK From spike66 at att.net Mon Feb 9 21:12:32 2015 From: spike66 at att.net (spike) Date: Mon, 9 Feb 2015 13:12:32 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: <041c01d044ad$1f521b50$5df651f0$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of justin corwin Sent: Monday, February 09, 2015 11:13 AM To: ExI chat list Subject: Re: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads >...I don't think I've ever seen a setup with engines mounted so high on the fuselage, is there an engine/reference design that could be used for that? The artist seemed to have some very specific details one wouldn't just assume... The nozzles-forward design has some advantages that are compelling in the case where the craft doesn't go supersonic until you get way up into the thin air. The usual nozzles-aft arrangement is better at high speeds low in the atmosphere, but in this case, nozzles-forward makes sense. Reasoning: you have air-breathing relatively low speed launch, then get up to perhaps 0.2 atmospheres or less before you go supersonic and have all the shock wave headaches to spoil your day. With the air-breathing stage you get out of most of the aerodynamic drag penalty of nozzles-forward. You might even get away with control by throttling as opposed to gimbal nozzles, which is compelling if you are trying for minimal cost. They don't react as quickly as thrust vector control, but that might be OK for this application. Furthermore, with the nozzles forward design, you have the oscillating instability problem (that design puts two poles in the right half plane) but if you get tricky with your lead/lag control system, I can envision something like that working.
If so, that is perhaps your best bet for a low cost high production run throwaway first stage. Terminology: an aircraft launch often refers to the airplane as a half stage, so the first stage is your first non-air breathing stage. >...I like the idea of establishing some numbers. I know that BAE in Mojave makes (or made) drones out of F-4 Phantoms to be used as targets for missile tests. Any chance somebody knows what they pay for those? You'd want it unmanned anyway... Ja! My first engineering job in 1983 was documenting the F-4 to drone conversion process, making a parts list, all the stuff they were doing back then. Good chance you could get the US Navy to give you one of those. An F-4 might be a really good choice for this application, since they didn't really know all the forces they were dealing with back in the 1950s, so they built it strong, at the expense of weight (those are heavy bastards, but sturdy, and those engines are marvelous beasts.) What I don't know is if the F-4 could handle the center-of-pressure offset with that negative dihedral tail design, however. I know for a fact the early prototypes of the F-4 had a positive dihedral tail and a few of those still exist down in the Mojave boneyard. Retrofit an F-4 with one of those A-version tails, take off the tailhook, retrofit those crazy-big landing gear with much smaller, lighter gear that can only handle a long flat runway landing on a calm day with tanks empty, then launch the thing with a railcar, getting up to near 300 knots on the deck. That would be a fun project to design. Anyone here have buddies at DARPA? >...If there were a dedicated small payload rocket, I think more cubesats might get made, but assuming there aren't, you have a market of about two successful launches a year of an 18U carrier. If you could kick your prices down, more might occur. Also, currently small payloads often have to wait, which means timely payloads need dedicated launchers.
If you could guarantee a launch within six months, you might attract more business as well? outlawpoet Ja I think there is a market there, not a huge one. But if DARPA has a practical nozzles-forward control system I can see getting the Navy to practically give you a Phantom, and there are still guys around who know how to get those engines running. It occurred to me that we could even recover the control section if you did it right. One of the advantages of a nozzles-forward design is that you can drop off tanks from the aft end with the engines still firing, and perhaps mount the compressor turbines such that you get them back. Maybe. If you're lucky. spike http://video.foxnews.com/v/4036325757001/watch-how-darpa-plans-to-launch-satellites-into-space/?#sp=show-clips From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 08, 2015 11:57 PM To: ExI chat list Subject: Re: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads >...Looks like a rip-off of Virgin Galactic's basic concept. My question is why would this cost a million dollars? -Kelly I can think that a lot of the cost of this would be in the inertial reference gear in the guidance system. For typical satellite work, that stuff doesn't come cheap. But given some kind of standard bus, there would be mass production, thousands of units perhaps. I can imagine a standardized thousand-unit production run of guidance and control systems including thrust vector control getting down below a million clams, and everything else combined below 100k. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -- Justin Corwin outlawpoet at gmail.com http://programmaticconquest.tumblr.com http://outlawpoet.tumblr.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Mon Feb 9 21:18:58 2015 From: spike66 at att.net (spike) Date: Mon, 9 Feb 2015 13:18:58 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: <043901d044ae$05893220$109b9660$@att.net> -----Original Message----- From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK Sent: Monday, February 09, 2015 12:10 PM To: ExI chat list Subject: Re: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads On 9 February 2015 at 19:13, justin corwin wrote: > I don't think I've ever seen a setup with engines mounted so high on > the fuselage, is there an engine/reference design that could be used for that? > The artist seemed to have some very specific details one wouldn't just > assume. > It is a new design for small rockets launched from aircraft. There is a good review here: Quote: >...Boeing plans to take a unique approach with the ALASA launch vehicle that is also intended to lower complexity and thus costs. The rocket will be powered by a monopropellant: a combination of nitrous oxide and acetylene, mixed together in the same propellant tank and "slightly chilled" below room temperature, Clapp said. That propellant choice offers simplicity as well as a specific impulse "not far off" from LOX and RP-1. "That's kind of a big deal," he said. "In general, it's a dramatic simplification of the complexity of a rocket vehicle." >...The rocket's design is also unusual, mounting four engines just below the payload on the vehicle. The engines are used for the first and second stages of the rocket, with propellant tanks below the engines dropping away when exhausted. This approach avoids the expense and complexity of separate sets of engines for the first two stages. 
------------- >...BillK _______________________________________________ Cool, thanks BillK. I should have read this before I wrote the previous. The Boeing boys are thinking the same thing I did: lose some cost by using the same nozzles all the way up. If you go that route, your penalty for acetylene/nitrous really isn't all that much, because your nozzles are forced smaller anyway, so you under-expand the exhaust and don't get the full advantage of LOX/kerosene engines, and... you also get a small advantage in higher fuel/oxidizer density with acetylene/nitrous rockets. So this all makes sense to me and I welcome the day. Reason: government-funded rockets are always so very no-compromise on performance, or rather performance at the expense of everything, specifically expense. Government-funded rockets are too expensive. We have long known that cost can come down if we accept lower performance. spike From anders at aleph.se Tue Feb 10 01:24:51 2015 From: anders at aleph.se (Anders Sandberg) Date: Tue, 10 Feb 2015 02:24:51 +0100 Subject: [ExI] most common jobs In-Reply-To: Message-ID: <1896103273-7837@secure.ericade.net> The issue is whether (1) machines can substitute for a large set of jobs, and (2) there is no good way of inventing new jobs for humans. The industrial revolution did (1) to farming, but created lots of new occupations. The postindustrial service society has also invented a lot of jobs. I think the scary scenarios are (A) when a sudden jump in machine capability causes unemployment fast, while inventing jobs is far slower, and (B) when (2) holds because of machines being way better at nearly anything humans of some normalish capacity can do. There is also scenario (C), where automation grabs jobs belonging to a group that has a loud voice in society. In all cases, we should expect disruption. (A) and (C) are temporary, but also pretty likely. (B) is the tough one. But whether it can happen depends on whether it is really hard to find uses for people. 
I suspect that there are a lot of social and symbolic uses that would remain pretty safe (priests, politicians, prostitutes, prosecutors...) not because of skill, but by virtue of social roles. It might just be that a lot of jobs get "thinner": supported by machines overseen by the human, and quite possibly with way fewer people doing each occupation but a lot more such thin occupations. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Tue Feb 10 21:38:50 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 10 Feb 2015 15:38:50 -0600 Subject: [ExI] musings on teaching Message-ID: Some tongue in cheek, some serious. You figure it out. Scenario 1 - Teacher has class read something, then everyone discusses everyone's interpretation of said something. Verdict - This is not teaching any more than reading a bible verse and discussing what everyone thinks of it is bible study. Scenario 2 - class reads something, then teacher politely listens to idiots' interpretations, then shoots them down one by one and offers the whole and complete truth, i.e., his opinion. Verdict - some teaching here but a lot of time wasted Scenario 3 - Class reads, teacher tells them what it means, wastes no time with idiots, embarrasses no one who hasn't read it, angers those who disagree into some actual thought, perhaps. Verdict - OK for undergraduates - won't get good evaluations Scenario 4 - Same as three but tells them nothing - assigns paper on reading to be corrected, preferably under the influence of something. When opinion is beaten out of him offers interpretation that mixes right-on with wildly improbable, student to sort out which is which.* Verdict - OK for grad school, some wasted time Scenario 5 - same as four, but class never meets. Reading list is put on the board, papers are collected at semester's end. 
Paper essentials are completely and unambiguously spelled out, cutting down on trips to the prof's office. Verdict - good for lazy teachers who really don't like to teach - grad school only Scenario 6 - same as four, but paper essentials are poorly and ambiguously spelled out so that success is totally a matter of prof's opinion. Students' panicky trips to prof's office give prof the intellectual stimulation he needs, and most teaching is done in his office. Verdict - The best. Ambiguity and anxiety keep enrollment down and attract only good students. * In an intro course once I got bored and started deviating from the book and from the principles etc. that I was supposed to be teaching. I got crazier and crazier for about 15 minutes before one very hesitant girl raised her hand and asked if I was really sure about that last thing. I said that for 15 minutes I had been shoveling bullshit. I told the very angry class that they had to be able to recognize bullshit as part of their essential education and they just had a taste of it. Still weren't too thrilled at being duped. Who is? Bill W -------------- next part -------------- An HTML attachment was scrubbed... URL: From robot at ultimax.com Wed Feb 11 00:22:20 2015 From: robot at ultimax.com (Robert G Kennedy III, PE) Date: Tue, 10 Feb 2015 19:22:20 -0500 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: Message-ID: <920eadbc55563d7d6cb1c05111731ac3@ultimax.com> As described and quoted from The Space Review, this is not a mono-propellant, it's a bi-propellant. Furthermore, the two parts (powerful oxidizer, explosive fuel, both of them gases!) are in the same container. I think there's a shorter word for that: bomb. Doesn't sound like a good idea. Did something get lost in translation? Jeff Foust has been in the space reporting biz a *long* time. This is not a new idea. 
Of course, everyone remembers the little Reagan-era ASAT launched from an F-15 (Homing Overlay Experiment?). However, the concept was proved much further back than that: the USAF and the USN launched a variety of experimental ASATs from fighter aircraft way back in the late 1950s. One of them out of China Lake IIRC was jokingly called "NOTS-NIK" ("NOT a SputNIK"). The NOTS actually stood for something real, but I forget what. RGK3 On Mon, 9 Feb 2015 20:10:07 +0000, BillK said: > It is a new design for small rockets launched from aircraft. > There is a good review here: > > Quote: > Boeing plans to take a unique approach with the ALASA launch vehicle > that is also intended to lower complexity and thus costs. The rocket > will be powered by a monopropellant: a combination of nitrous oxide > and acetylene, mixed together in the same propellant tank and > "slightly chilled" below room temperature, Clapp said. That > propellant > choice offers simplicity as well as a specific impulse "not far off" > from LOX and RP-1. "That's kind of a big deal," he said. "In general, > it's a dramatic simplification of the complexity of a rocket > vehicle." > > The rocket's design is also unusual, mounting four engines just below > the payload on the vehicle. The engines are used for the first and > second stages of the rocket, with propellant tanks below the engines > dropping away when exhausted. This approach avoids the expense and > complexity of separate sets of engines for the first two stages. > ------------- -- Robert G Kennedy III, PE www.ultimax.com 1994 AAAS/ASME Congressional Fellow U.S. 
House Subcommittee on Space From spike66 at att.net Wed Feb 11 01:02:46 2015 From: spike66 at att.net (spike) Date: Tue, 10 Feb 2015 17:02:46 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: <920eadbc55563d7d6cb1c05111731ac3@ultimax.com> References: <920eadbc55563d7d6cb1c05111731ac3@ultimax.com> Message-ID: <024301d04596$73915660$5ab40320$@att.net> -----Original Message----- From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Robert G Kennedy III, PE Sent: Tuesday, February 10, 2015 4:22 PM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads >...As described and quoted from The Space Review, this is not a mono-propellant, it's a bi-propellant. >...Furthermore, the two parts (powerful oxidizer, explosive fuel, both of them gases!) are in the same container. >...I think there's a shorter word for that: bomb. >... The NOTS actually stood for something real, but I forget what. >...RGK3 Robert, NOTS is Naval Ordnance Test Station. The Pegasus is launched from an airplane. What I see as the contribution of this DARPA idea is a nozzles-forward design which allows the same nozzles to be used all the way up. Ja, sounds like something was missed. In principle you can have two reactive gases in the same container. This happens thousands of times per minute in your car engine. They are well mixed, under pressure and hot, but don't combust until the spark plug starts the reaction. However, in this case acetylene and nitrous can't be mixed. The versions I have seen of that are hypergolic as all hell; however, I can imagine them as liquids being mixed, then vaporized in the combustion chamber. Nitrous oxide is a lot easier to keep liquid than is hydrogen, and acetylene is easier to keep liquid than is LOX. 
Hey here's an idea: have the fuels carried aboard the aircraft in well-insulated tanks, carry the empty launcher way up above most of the water vapor, fuel the spacecraft in a single non-insulated tank 10 seconds before release, drop the thing and get outta Dodge before the motors start. It's a dangerous way to launch, but no one is aboard either craft, so that might work. spike From msd001 at gmail.com Wed Feb 11 02:54:53 2015 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 10 Feb 2015 21:54:53 -0500 Subject: [ExI] musings on teaching In-Reply-To: References: Message-ID: On Feb 10, 2015 4:40 PM, "William Flynn Wallace" wrote: > * In an intro course once I got bored and starting deviating from the book and from the principles etc. that I was supposed to be teaching. I got crazier and crazier for about 15 minutes before one very hesitant girl raised her hand and asked if I was really sure about that last thing. I said that for 15 minutes I had been shoveling bullshit. I told the very angry class that they had to be able to recognize bullshit as part of their essential education and they just had a taste of it. Still weren't too thrilled at being duped. Who is? I had an elementary school teacher burn a half-day in December while we wrote heartfelt letters to the Mince farmer asking him to spare the poor creatures rather than turning their meat into pies. It was only after collecting this "assignment" that the teacher shared the truth about mincemeat. That was the last time I ever completely trusted a teacher. I don't remember her, but it was a good lesson. :) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafal.smigrodzki at gmail.com Wed Feb 11 05:25:11 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 11 Feb 2015 00:25:11 -0500 Subject: [ExI] most common jobs In-Reply-To: References: <028d01d043ee$f7f09140$e7d1b3c0$@att.net> Message-ID: On Sun, Feb 8, 2015 at 6:40 PM, BillK wrote: > And the rich may feel > that paying a small tax > > If entertainment, drugs and social activities are available with > sufficient basic income > And the poor cities can ... provide minimal > services. > ### Interpreting the words "small", "sufficient" and "minimal" is the key point of prognostic exercises here: Let's say, thanks to automation there may be another two-orders-of-magnitude increase in per capita income (see http://en.wikipedia.org/wiki/World_economy#mediaviewer/File:World_GDP_Per_Capita_1500_to_2000,_Log_Scale.png ) but occurring over the next 50 rather than 500 years. Let's say that the top 1% of earners would continue to take about 20% of total income, unchanged from the current situation. In that case, their total income would be equal to 2,000% of current total GDP (i.e. 20 times what all Americans earn today). Let's say that the effective Federal tax rate remained at 20%, roughly what it is now (there is some hocus-pocus here but it is not important for my point). Twenty percent of 20 times current GDP is 4 times current US GDP, or make it about 70 trillion dollars. Assuming a lot of immigration, there will be 700 million Americans in 2070. So, the small, 20% tax on only a tiny sliver of the population could provide an income subsidy of 100,000 dollars per year (after tax), expressed as real income in 2015 dollars. There may be many persons who would protest that 100,000 dollars a year is a pittance, and an insult, too, way below minimal. 
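Rafal's back-of-envelope numbers hang together; here is the chain of arithmetic spelled out (the growth, income-share, tax-rate, and population figures are his scenario assumptions, and the 17.5-trillion-dollar current GDP is a rough 2015 value I am supplying):

```python
# Arithmetic check of the scenario above. All scenario parameters are
# Rafal's assumptions; the current-GDP figure is a rough 2015 value.

current_gdp = 17.5e12                     # ~2015 US GDP, in dollars
future_total_income = 100 * current_gdp   # two orders of magnitude growth
top1_income = 0.20 * future_total_income  # top 1% take 20% of all income
tax_take = 0.20 * top1_income             # effective 20% federal rate
population = 700e6                        # projected Americans in 2070

subsidy = tax_take / population
print(tax_take / 1e12, subsidy)  # 70.0 (trillion dollars) 100000.0
```

The 70 trillion and the 100,000-dollar-per-person subsidy both match the figures in the post; the result scales linearly with whichever current-GDP estimate one plugs in.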
I could imagine that in the next 50 years loud voices would demand a cool million dollars per year per person as the absolute minimum they would accept from the despicable capitalist pigs who make all this possible. It will be a different world, and yet I am sure that in many ways it could be eerily similar to today's. ----------------- > > Might turn out OK. Yep. Might turn out OK. Rafal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Wed Feb 11 05:39:44 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 11 Feb 2015 00:39:44 -0500 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: On Mon, Feb 9, 2015 at 3:10 PM, BillK wrote: > The rocket's design is also unusual, mounting four engines just below > the payload on the vehicle. The engines are used for the first and > second stages of the rocket, with propellant tanks below the engines > dropping away when exhausted. This approach avoids the expense and > complexity of separate sets of engines for the first two stages. ### Interesting. Imagine a bladder made of sturdy carbon-fiber foil, suspended from a wing-shaped bar with attached rocket engines and with payload on top. During launch the bladder is being pulled rather than squished, and we all know that tensile strength of long and thin objects is amazing compared to their compressive strength. So the bladder would have a tiny weight compared to a stiff fuel tank of the same capacity. The vehicle would be suspended from a gantry for launch, its bladder almost touching the ground. The massive booster engines would run at full power all the way to LE orbit. Bladder would be jettisoned and promptly burn up on re-entry. Engines and their wing glide to an unpowered landing and the payload capsule flies wherever it's supposed to. 
Spike, isn't this a neat idea? :) Rafał -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 11 06:32:36 2015 From: spike66 at att.net (spike) Date: Tue, 10 Feb 2015 22:32:36 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: <02e401d045c4$878cc2c0$96a64840$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Rafal Smigrodzki Sent: Tuesday, February 10, 2015 9:40 PM To: ExI chat list Subject: Re: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads On Mon, Feb 9, 2015 at 3:10 PM, BillK wrote: The rocket's design is also unusual, mounting four engines just below the payload on the vehicle. The engines are used for the first and second stages of the rocket, with propellant tanks below the engines dropping away when exhausted. This approach avoids the expense and complexity of separate sets of engines for the first two stages. ### Interesting. Imagine a bladder made of sturdy carbon-fiber foil, suspended from a wing-shaped bar with attached rocket engines and with payload on top. During launch the bladder is being pulled rather than squished, and we all know that the tensile strength of long and thin objects is amazing compared to their compressive strength. So the bladder would have a tiny weight compared to a stiff fuel tank of the same capacity. >...The vehicle would be suspended from a gantry for launch, its bladder almost touching the ground. The massive booster engines would run at full power all the way to LE orbit. Bladder would be jettisoned and promptly burn up on re-entry. Engines and their wing glide to an unpowered landing and the payload capsule flies wherever it's supposed to. >...Spike, isn't this a neat idea? :) Rafał Ja! 
I like the notion of an air-breathing first stage if there is any way to get that done. Reason: a lot of structural design weight is in handling the aerodynamic loads of supersonic flight while still fairly low in the atmosphere. If we could work out a means of climbing out at a leisurely couple hundred knots, then accelerating on up to just below Mach 1 when you get up around 0.2-ish atmospheres, you get to save weight in structure, in the nose radome, and so forth. With any rocket, you need to accelerate hard; otherwise the gravity losses are too great. As you pointed out, structurally there are big advantages to tensile stresses rather than compressive, but you need to consider bending moment rigidity in either case. I don't know what happens to your control model if the rocket flexes very much. I have seen that any nozzle-forward rocket design loves to go into oscillating instability. Somewhere I saw a video of Robert Goddard's first liquid fueled rocket, which was nozzle-forward. It went unstable. Like everything in control theory, it seems like systems do the opposite of what you would think. Nozzle forward seems stable to me: center of thrust way forward of the center of mass and center of pressure. That looks to me like it would fly straight, but it doesn't. Nearly everyone here has seen and played with toy rockets. For some odd reason those things do fly straight, with the center of thrust well aft of the center of gravity. In any case, I am eager to see what DARPA can do with this idea. spike -------------- next part -------------- An HTML attachment was scrubbed... 
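The counterintuitive result spike describes is the classic "pendulum rocket fallacy", the same one that caught Goddard: thrust directed along the body axis passes through the center of mass no matter where the nozzle sits, so nozzle placement alone produces no righting torque. A minimal check (my sketch, with made-up numbers, not anything from the thread):

```python
# Sketch of the pendulum rocket fallacy: axial thrust exerts zero
# torque about the center of mass whether the nozzle is mounted fore
# or aft, so neither configuration is self-righting. Illustrative
# numbers only, not vehicle data.

def torque_about_cg(mount_offset, thrust):
    """2D cross product r x F for a force applied at mount_offset,
    both expressed in body coordinates with the origin at the CG."""
    rx, ry = mount_offset
    fx, fy = thrust
    return rx * fy - ry * fx

T = 50_000.0  # N, thrust along the +x body axis (illustrative value)

aft_mount = torque_about_cg((-3.0, 0.0), (T, 0.0))  # nozzle 3 m aft of CG
fwd_mount = torque_about_cg((+3.0, 0.0), (T, 0.0))  # nozzle 3 m forward
print(aft_mount, fwd_mount)  # 0.0 0.0 in both configurations

# What actually differs is aerodynamics: fins put the center of
# pressure aft of the CG, so a small sideslip produces a restoring
# torque -- which is why toy rockets fly straight even with the
# center of thrust well aft of the center of gravity.
```

The same cross product applied to any off-axis force (a sideways aerodynamic load, say) is nonzero, which is where the real stability or instability comes from.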
URL: From pharos at gmail.com Wed Feb 11 10:40:57 2015 From: pharos at gmail.com (BillK) Date: Wed, 11 Feb 2015 10:40:57 +0000 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: <024301d04596$73915660$5ab40320$@att.net> References: <920eadbc55563d7d6cb1c05111731ac3@ultimax.com> <024301d04596$73915660$5ab40320$@att.net> Message-ID: On 11 February 2015 at 01:02, spike wrote: > The Pegasus is launched from an airplane. What I see are the contributions > of this DARPA idea is a nozzles-forward design which allows the same nozzles > to be used all the way up. > > Ja sounds like something was missed. In principle you can have two reactive > gases in the same container. This happens thousands of times per second in > your car engine. They are well mixed, under pressure and hot but don't > combust until the spark plug starts the reaction. > > However in this case, Acetylene and nitric can't be mixed. The versions I > have seen of that are hypergolic as all hell, however I can imagine them as > liquids being mixed, then vaporized in the combustion chamber. Nitric oxide > is a lot easier to keep liquid than is hydrogen and acetylene is easier to > keep liquid than is LOX. > > DARPA haven't actually tested the new mono-propellant yet - as at 5 Feb 2015. See: Quote: Perhaps the most daring technology ALASA seeks to implement is a new high-energy monopropellant, which aims to combine fuel and oxidizer into a single liquid. If successful, the monopropellant would enable simpler designs and reduced manufacturing and operation costs compared to traditional designs that use two liquids, such as liquid hydrogen and liquid oxygen. ------------- So they still have to draw lots for someone to light the blue touch paper and stand well back. 
BillK From henrik.ohrstrom at gmail.com Tue Feb 10 15:22:23 2015 From: henrik.ohrstrom at gmail.com (Henrik Ohrstrom) Date: Tue, 10 Feb 2015 16:22:23 +0100 Subject: [ExI] VR content In-Reply-To: <013801d038d5$cee02550$6ca06ff0$@att.net> References: <569066058-14844@secure.ericade.net> <00a801d038c1$e097e940$a1c7bbc0$@att.net> <013801d038d5$cee02550$6ca06ff0$@att.net> Message-ID: AR content is hard, and the hardware is crap. I have a Meta1 (physical access! Mine!) and it is very close to unusable :( It works as promised, but no, it will be a miracle if it can be clinically useful. I will try with it, since it is so hard to get any AR/VR stuff at all that has any useful qualities anyway. So we do not have hardware, not even Google Glass, and even less user interface; we need an AR iPhone revolution. /henrik -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Thu Feb 12 01:30:16 2015 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 11 Feb 2015 20:30:16 -0500 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: <02e401d045c4$878cc2c0$96a64840$@att.net> References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> <02e401d045c4$878cc2c0$96a64840$@att.net> Message-ID: On Feb 11, 2015 1:48 AM, "spike" wrote: > Like everything in control theory, it seems like systems do the opposite of what you would think. Nozzle forward seems stable to me: center of thrust way forward of the center of mass and center of pressure. That looks to me like it would fly straight, but it doesn't. Nearly everyone here has seen and played with toy rockets. For some odd reason those things do fly straight, with the center of thrust well aft of the center of gravity. Having seen the video of the weather balloon launched iPhone, I am curious if there is any application for balloons to reach upper atmosphere. 
Given some of the new aerogel materials there might be new designs possible - you know, every few decades reexamine old ideas that weren't possible to see if they can work now. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Thu Feb 12 06:11:25 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Thu, 12 Feb 2015 01:11:25 -0500 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: <02e401d045c4$878cc2c0$96a64840$@att.net> References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> <02e401d045c4$878cc2c0$96a64840$@att.net> Message-ID: On Wed, Feb 11, 2015 at 1:32 AM, spike wrote: > > > > >?Spike, isn't this a neat idea? :) Rafa? > > Ja! I like the notion of an air-breathing first stage if there is any way > to get that done. Reason: a lot of structural design weight is in handling > the aerodynamic loads of supersonic flight while still fairly low in the > atmosphere. If we could work out a means of climbing out at a leisurely > couple hundred knots, then accelerating on up to just below Mach 1 when you > get up around 0.2-ish atmospheres, you get to save weight in structure, in > the nose radome, and so forth. > ### OK, so no launch gantry - instead a White Knight on steroids, huge jet-propelled carrier engineered for reaching maximum feasible flight height and speed, probably a delta-winged monstrosity bristling with engines, requiring a huge runway but capable of going well into the stratosphere supersonically while loaded with a B-52's worth of cargo. Then, you release the spaceship, with the wobbly two-wall bladder (for fuel and oxidiser) and the rocket engines with clever high-speed throttles and dampers that counteract the wobbles in the bladder and the load-bearing wing to which all is attached, and presto, you are in space! Or imagine this - the fuel bladder is towed on a short umbilical. 
Have you seen these toy rockets with an attached stick? The stick stabilizes the rocket, and a fuel bladder in tow could wobble more than a rigidly attached fuel bladder, with less impact on the engines. The reason why I am thinking about single-use fuel bladders is that I know about the single-use cell culture bags used by biotech companies - they are capable of growing huge volumes of cells using a much smaller amount of plastic, compared to standard bioreactors. Cross-fertilization of disciplines! Rafał -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Feb 12 19:24:30 2015 From: johnkclark at gmail.com (John Clark) Date: Thu, 12 Feb 2015 14:24:30 -0500 Subject: [ExI] Cool robot dog Message-ID: http://techcrunch.com/2015/02/09/google-spot-dog-robot/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilia.stambler at gmail.com Fri Feb 13 13:26:43 2015 From: ilia.stambler at gmail.com (Ilia Stambler) Date: Fri, 13 Feb 2015 15:26:43 +0200 Subject: [ExI] The Critical Need to Promote Research of Aging (press release ISOAD) Message-ID: Re: The Critical Need to Promote Research of Aging and Aging-related Diseases to Improve Health and Longevity of the Elderly Population (press release ISOAD) (Original http://isoad.org/content.aspx?info_lb=640&flag=571 ) Dear colleagues, I would like to draw your attention to the following press release of the International Society on Aging and Disease (ISOAD). On November 1-2, 2014, the first International Conference on Aging and Disease (ICAD) of the International Society on Aging and Disease (ISOAD, http://isoad.org/) took place in Beijing, China. It showcased some of the latest advances in aging and longevity research, including regenerative medicine, geroprotective substances and regimens. 
The range of cutting edge, often breakthrough, topics and advances from over 60 presenters from around the world can be seen in the conference program http://isoad.org/content.aspx?info_lb=606&flag=103. The conference report, entitled "Stop Aging Disease!" (which is also the ISOAD official motto), was recently published in the ISOAD journal *Aging and Disease*, http://www.aginganddisease.org/EN/10.14336/AD.2015.0115 briefly describing a variety of the fields presented, from modulation of energy balance, through toxicology, genomics, proteomics and immunotherapy to systems biology, behavioral therapy and health research policy, aimed to achieve healthy longevity. The conference provided yet another illustration of the great promise of longevity research. Yet a much greater effort and investment will be needed to bring advances in fundamental science toward safe, effective and universally accessible treatments for age-related ill health. Therefore, the conference further emphasized the vital importance of public support of research on biology of aging and aging-related diseases for public health, and offered some policy recommendations for its promotion. The rationale and recommendations can be found in the conference resolution http://isoad.org/content.aspx?info_lb=638&flag=103 and in the more detailed position paper, published on behalf of the ISOAD following the conference, entitled "The Critical Need to Promote Research of Aging and Aging-related Diseases to Improve Health and Longevity of the Elderly Population" http://www.aginganddisease.org/EN/10.14336/AD.2014.1210. The policy recommendations include increasing funding, specific incentives and institutional support for aging and longevity research. We invite the public to contribute to the widest possible recognition and support of biological research of aging and aging-related diseases. 
We welcome the readers to circulate this position paper, share it in your social networks, forward it to politicians, potential donors and media, organize discussion groups to debate the topics raised (that may later grow into grassroots longevity research and activism groups in different countries), translate this position paper into your language, reference and link to it, even republish it in part or in full, and join the ISOAD or other aging and longevity research and advocacy organizations. Consider focusing the discussions and promotions on special days of symbolic significance, such as February 21 - the 140th anniversary of the birth of the longest-lived human, Jeanne Calment (who reached the lifespan of 122 years); March 1 - the Future Day; April 7 - the UN World Health Day; May 15 - the 170th anniversary of the birth of the founder of scientific aging and longevity research, the author of the term "gerontology", the Nobel Prize winner Élie Metchnikoff; October 1 - the UN International Day of Older Persons (celebrated by some parts of the longevity advocacy community as the "International Longevity Day"); November 10 - the UN World Science Day for Peace and Development; etc. Creating discussions, meetings and publications on aging and longevity research on several consecutive days, or regularly, may only increase the impact. Hopefully, thanks to our joint efforts, "The Critical Need to Promote Research of Aging and Aging-related Diseases to Improve Health and Longevity of the Elderly Population" will be recognized and acted upon by all the segments of society, from the grassroots through the professional to the decision-making level, with the effort corresponding to the urgency of the need. Ilia Stambler, PhD. Outreach Coordinator. International Society on Aging and Disease (ISOAD) http://isoad.org/ http://isoad.org/content.aspx?info_lb=640&flag=571 http://www.longevityforall.org/ http://www.bioaging.org.il/ http://www.longevityhistory.com/ -- Ilia Stambler, PhD Outreach Coordinator. 
International Society on Aging and Disease - ISOAD http://isoad.org Chair. Israeli Longevity Alliance / International Longevity Alliance (Israel) - ILA http://www.bioaging.org.il Coordinator. Longevity for All http://www.longevityforall.org Author. Longevity History. *A History of Life-Extensionism in the Twentieth Century* http://longevityhistory.com Email: ilia.stambler at gmail.com Tel: 972-3-961-4296 / 0522-283-578 Rishon Lezion. Israel -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Feb 13 14:07:30 2015 From: pharos at gmail.com (BillK) Date: Fri, 13 Feb 2015 14:07:30 +0000 Subject: [ExI] The Critical Need to Promote Research of Aging (press release ISOAD) In-Reply-To: References: Message-ID: On 13 February 2015 at 13:26, Ilia Stambler wrote: > I would like to draw your attention to the following press release of the > International Society on Aging and Disease (ISOAD). > > Hopefully, thanks to our joint efforts "The Critical Need to Promote > Research of Aging and Aging-related Diseases to Improve Health and Longevity > of the Elderly Population" will be recognized and acted upon by all the > segments of the society, from the grassroots through the professional to the > decision-making level, with the effort corresponding to the urgency of the > need. > Well, a good start would be to get hospitals and the medical profession to stop killing people. In the UK, the NHS has recently proposed that it would be a good idea to try to stop avoidable deaths in hospitals due to poor care. Quote: New plans to reduce the number of "avoidable deaths" in English hospitals have been unveiled by Health Secretary Jeremy Hunt. The existing estimate of 12,000 annual avoidable deaths in the NHS was seen by the health secretary as broadly in line with similar healthcare systems abroad, such as those in France and Germany, but was still too high. ------------------------ The situation in the US is worse. 
Quote: How to Stop Hospitals From Killing Us Medical errors kill enough people to fill four jumbo jets a week. A surgeon with five simple ways to make health care safer. Sept. 21, 2012 -------------------- BillK From avant at sollegro.com Sat Feb 14 03:49:22 2015 From: avant at sollegro.com (Stuart LaForge) Date: Fri, 13 Feb 2015 19:49:22 -0800 Subject: [ExI] Zombie glutamate Message-ID: In trying to follow another thread, I kept running into a term I can't process: What in Darwin's name is zombie glutamate? Is it the salt you get by reacting an alkaline zombie with glutamic acid? And what is zombie red for that matter? I know what philosophic zombies are, but since when did zombie become an adjective? And most importantly, what does it *mean*? Stuart LaForge Sent from my phone. From stathisp at gmail.com Sat Feb 14 09:01:20 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 14 Feb 2015 20:01:20 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: On 14 February 2015 at 14:49, Stuart LaForge wrote: > > In trying to follow another thread, I kept running into a term I can't process: > > What in Darwin's name is zombie glutamate? > > Is it the salt you get by reacting an alkaline zombie with glutamic acid? And what is zombie red for that matter? I know what philosophic zombies are but since when did zombie become an adjective? > > And most importantly what does it *mean*? > > Stuart LaForge I think I came up with "zombie glutamate" in response to Brent Allsop. Brent believes that consciousness is not due to the system structure but rather to the substrate. His example is that the neurotransmitter glutamate may be responsible for red sensations (this is just an example, Brent makes clear, not the actual role of glutamate). 
So if you substitute a functional analogue for glutamate, you eliminate or change the red sensations, even though the subject may be able to recognise that what he is seeing is supposed to be red and can talk about it as if he does see red - "zombie red". But I don't agree with this: I think that if you replace the glutamate with a functional analogue such as glutamate made of different isotopes from the ones normally present ("zombie glutamate") the subject's brain will function exactly the same and the subject will experience and report experiencing exactly the same sensations. In fact, I don't believe philosophical zombies are possible. -- Stathis Papaioannou From brent.allsop at canonizer.com Sat Feb 14 12:36:17 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 14 Feb 2015 05:36:17 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: <54DF4141.1030405@canonizer.com> Hi Stuart, I would normally point a person to the "Detecting Qualia" paper to answer this question: https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit But then, if you know as much as you know, you must have already read the paper? I don't know if I've used the exact term "Zombie Glutamate", so someone else must have used this for the first time, after reading and understanding the paper. Did you hear this term from someone else, or are you the first to think of or use it? It's a good term. It makes me happy that people are starting to understand things to this level, to be able to use this terminology in new and powerful ways like this. The answer to this is described in the "Intrinsic Qualia versus Representations of Qualia" section of the paper. The testable prediction is that something in physics is responsible for a qualitative property of our conscious knowledge, like the taste of salt or the color red. 
The falsifiable prediction is that it could be something like the neurotransmitter glutamate that has the intrinsic redness color we can experience. All of our physical senses and scientific instruments that detect qualia produce representations of glutamate that are only like glutamate in that they can be interpreted as if they were glutamate. In the same way, we have knowledge of glutamate. But glutamate, in its crystalline form, reflects white light, so we represent it with knowledge that has a whiteness quality. Or worse, we represent it as something that has no quality at all. Both of which could be misinterpretations of the real redness quality of glutamate. (Or whatever is responsible for a real redness quality.) The bottom line is, the testable theoretical prediction is that all physical science done today is qualitatively naive zombie physical science, simply because we don't know how to interpret the zombie information we have about such things as glutamate. Both zombie information and qualitative information systems can model, simulate, and represent the other. But, as long as you don't have real glutamate, and you have something that is just being interpreted as if it was the real thing, unless you know how to qualitatively interpret what it represents, it is zombie glutamate. Let me know what, if any of the above, does, or does not help. Brent Allsop On 2/13/2015 8:49 PM, Stuart LaForge wrote: > In trying to follow another thread, I kept running into a term I can't process: > > What in Darwin's name is zombie glutamate? > > Is it the salt you get by reacting an alkaline zombie with glutamic acid? And what is zombie red for that matter? I know what philosophic zombies are but since when did zombie become an adjective? > > And most importantly what does it *mean*? > > Stuart LaForge > > > Sent from my phone. 
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From johnkclark at gmail.com Sat Feb 14 15:38:15 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 14 Feb 2015 10:38:15 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: On Sat, Feb 14, 2015 Stathis Papaioannou wrote: > > Brent believes that consciousness is not due to the system structure but > rather to the substrate. If computers with the same logical structure but with different substrates (say vacuum tubes, Germanium transistors and Silicon transistors) all came up with different answers when you multiplied 27 by 54 I'd have a lot more confidence that this theory is correct, but they don't, they all come up with 1458. Actually calling this a theory is giving it too much credit, as there is no experiment that can be performed to prove it wrong, or even an experiment that would allow you to learn a little more about it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Feb 14 15:57:23 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 02:57:23 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: On Sunday, February 15, 2015, John Clark wrote: > On Sat, Feb 14, 2015 Stathis Papaioannou > wrote: > >> > > Brent believes that consciousness is not due to the system structure but >> rather to the substrate. > > > If computers with the same logical structure but with different substrates > (say vacuum tubes, Germanium transistors and Silicon transistors) all > came up with different answers when you multiplied 27 by 54 I'd have a lot > more confidence that this theory is correct, but they don't, they all come > up with 1458. 
Actually calling this a theory is giving it too much credit > as there is no experiment that can be performed to prove it wrong, or even > an experiment that would allow you to learn a little more about it. > There are experiments that can be performed - try a physically different, but chemically identical substrate using alternative isotopes. It would work just the same, proving that working just the same is what is important, and not what the parts are made of. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Sat Feb 14 16:40:04 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Sat, 14 Feb 2015 10:40:04 -0600 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: Here is my question: even if you could identify the exact electrical and chemical goings-on in the brain when a person experiences some sensation, what exactly will you know? How will it help you know something else? We know, for ex., that the amygdala is involved in emotions, particularly anger. How does that help us understand anger and how to deal with it? I am not saying that it is a waste of time, I just want to know what you intend to do with the answers? bill w On Sat, Feb 14, 2015 at 9:57 AM, Stathis Papaioannou wrote: > > > On Sunday, February 15, 2015, John Clark wrote: > >> On Sat, Feb 14, 2015 Stathis Papaioannou wrote: >> >>> >> > Brent believes that consciousness is not due to the system >>> structure but rather to the substrate. >> >> >> If computers with the same logical structure but with different >> substrates (say vacuum tubes, Germanium transistors and Silicon >> transistors) all came up with different answers when you multiplied 27 by >> 54 I'd have a lot more confidence that this theory is correct, but they >> don't, they all come up with 1458. 
Actually calling this a theory is >> giving it too much credit as there is no experiment that can be performed >> to prove it wrong, or even a experiment that would allow you to learn a >> little more about it. >> > > There are experiments that can be performed - try a physically different, > but chemically identical substrate using alternative isotopes. It would > work just the same, proving that working just the same is what is > important, and not what the parts are made of. > > > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Feb 14 18:23:38 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 14 Feb 2015 13:23:38 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: On Sat, Feb 14, 2015 at 10:57 AM, Stathis Papaioannou wrote: >>> Brent believes that consciousness is not due to the system >>> structure but rather to the substrate. >> >> >> >> If computers with the same logical structure but with different >> substrates (say vacuums tubes, Germanium transistors and Silicon >> transistors) all came up with different answers when you multiplied 27 by >> 54 I'd have a lot more confidence that this theory is correct, but they >> don't, they all come up with 1458. Actually calling this a theory is >> giving it too much credit as there is no experiment that can be performed >> to prove it wrong, or even a experiment that would allow you to learn a >> little more about it. >> > > > There are experiments that can be performed - try a physically > different, but chemically identical substrate using alternative isotopes. 
> It would work just the same > That would just show what we already know, that intelligent behavior would not be affected; but how could you demonstrate that consciousness was not changed? You couldn't. That's why consciousness theories, as opposed to intelligence theories, are so popular on the internet; they're easy because they don't have to actually do anything or explain any experimental results, they can just prattle on and on, and one consciousness theory works just as well (or badly) as another. It would take intelligence, and a lot of it, to come up with a good intelligence theory; but a consciousness theoretician doesn't need that, intelligence is just optional for him. Consciousness is easy but intelligence is hard. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Sat Feb 14 18:46:22 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sat, 14 Feb 2015 19:46:22 +0100 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: Considering yourself as something very special, which can function only on the so-called "organic chemistry as it is" - and not for example on some rudimentary but complex enough Turing machine ... that's quite preposterous and naive. As if the atoms weren't very rudimentary as well. Once your substrate allows a decent complexity - you are on! We need complexity, not a particular chemistry. Fine, if it's the chemistry which gives us that complexity. But being a chemo-biological chauvinist - that's silly. What bothers me is the real possibility that the complexity required isn't that high either. That I can be reloaded just too easily. On Sat, Feb 14, 2015 at 7:23 PM, John Clark wrote: > > > On Sat, Feb 14, 2015 at 10:57 AM, Stathis Papaioannou > wrote: > > >>> Brent believes that consciousness is not due to the system >>>> structure but rather to the substrate. 
>>> >>> >>> >> If computers with the same logical structure but with different >>> substrates (say vacuums tubes, Germanium transistors and Silicon >>> transistors) all came up with different answers when you multiplied 27 by >>> 54 I'd have a lot more confidence that this theory is correct, but they >>> don't, they all come up with 1458. Actually calling this a theory is >>> giving it too much credit as there is no experiment that can be performed >>> to prove it wrong, or even a experiment that would allow you to learn a >>> little more about it. >>> >> >> > There are experiments that can be performed - try a physically >> different, but chemically identical substrate using alternative >> isotopes. t would work just the same >> > > That would just show what we already know, that intelligent behavior would > not be effected; but how could you demonstrate that consciousness was not > changed? You couldn't. That's why consciousness theories, as opposed to > intelligence theories, are so popular on the internet, they're easy because > they don't have to actually do anything or explain any experimental > results, they can just prattle on and on and one consciousness theory works > just as well (or badly) as another. It would take intelligence and a lot of > it to come up with a good intelligence theory; but a consciousness > theoretician doesn't need that, intelligence is just optional for him. > Consciousness is easy but intelligence is hard. > > John K Clark > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
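The substrate-independence point being debated in this thread (same logical structure, different substrate, same answer) can be put in a few lines of code. The following is a minimal illustrative sketch, not anything from the thread itself: two multiplication routines whose internal mechanics differ completely stand in for two different "substrates", and they necessarily agree on 27 times 54.

```python
# Two "substrates" for the same logical operation: the internals differ
# completely, but the observable behaviour (the answer) is identical.

def multiply_native(a: int, b: int) -> int:
    """Use the machine's own multiplication instruction."""
    return a * b

def multiply_shift_add(a: int, b: int) -> int:
    """Russian-peasant multiplication: a shift-and-add mechanism.
    Assumes a non-negative b."""
    total = 0
    while b > 0:
        if b & 1:        # low bit of b set: accumulate the current shifted a
            total += a
        a <<= 1          # double a
        b >>= 1          # halve b
    return total

print(multiply_native(27, 54), multiply_shift_add(27, 54))  # 1458 1458
```

If the answer depended on the substrate the way Brent's position requires consciousness to, the two routines would have to disagree somewhere; since they implement the same logical structure, they cannot, which is the force of the 1458 example.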
URL: From brent.allsop at canonizer.com Sat Feb 14 19:22:06 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 14 Feb 2015 12:22:06 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: <54DFA05E.3070107@canonizer.com> Hello Folks, Tomaz, William, and many others, it seems to me from what they say, still don't understand what a redness quale is or is not. Nor do they understand what zombie glutamate is or is not. Maybe they have read the paper: https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit But if they have, I see no evidence that they understand what it is about. Stathis, it seems to me, at least understands what a redness quale is, and is not. But he is still missing the most important part of what I am trying to say, as proven when he states: > Brent believes that consciousness is not due to the system structure but rather to the substrate. This is absolutely wrong. I fully accept that it is possible that the system structure may be responsible for an elemental redness quale we can experience. What I am describing is, how can you prove to everyone, qualitatively, experimentally, whether it is system structure, some kind of a particular material, like glutamate, some kind of quantum weirdness, or maybe some kind of recursively complex system that enables a 'redness quality' to arise. I am open to any of these as being possible theories that science is about to prove correct using the qualitative methods I am describing. What I am describing is how you theoretically bridge the qualitative information gap, so you can prove who has it right, and who has it wrong. All I am saying is that whatever is responsible for us experiencing a redness quale is detectable, by using the same technique that our brain does, when we are able to detect that we have redness knowledge vs greenness knowledge. 
All I am doing is describing the qualitative theory that will enable that, by getting around the quale interpretation problem. The only reason I don't use a system theory, in the idealized 3 color world example, as required to bridge the qualitative information gap with a falsifiable theory, is because nobody has described any possible falsifiable way in which a system could cause a redness quale to arise. Once you provide that, I will be happy to use that in the falsifiable idealized effing theory example to better communicate to you how your belief that there is a 'hard' problem is completely wrong. Bill W asked: "I am not saying that it is a waste of time, I just want to know what you intend to do with the answers?" Right now, the prediction is that all of theoretical physical science is nothing but qualitatively blind zombie science. Theoretical physical scientists are spending billions of dollars on things like the Large Hadron Collider, and getting almost nothing. Yet they are completely missing the qualitative nature of physics, just like the brilliant scientist Mary who hasn't stepped out of the black and white zombie room yet, and experienced, for herself, what real glutamate is like and how it is different from zombie glutamate. In my opinion, the greatest discovery in physics, ever, will be the discovery of its qualitative natures. It will include the realization that all of our science, to date, is just zombie science. You don't need to spend billions of dollars to achieve that greatest discovery in physics ever. You just need to answer the question you are asking, so people will realize the significance and importance of knowing that a sunset likely has a qualitative nature about it. And your knowledge of the sunset, as phenomenally glorious as it is, has nothing to do with that. Discovering nature's qualitative properties will matter more than any other discovery in physics. 
Once we can do more than zombie science, once we can do qualitative science of redness, we will be able to finally answer the questions: What are we, what are spirits, what is consciousness, and most importantly, what is it all qualitatively like? And if you can't see why that is important, then, oh well, I have no hope for you, and your ability to understand what uploaded Godly consciousness could be qualitatively like, or basically what heaven will be like, in the near future. Uploads are going to be a lot more than zombie uploads. Brent Allsop On 2/14/2015 9:40 AM, William Flynn Wallace wrote: > Here is my question: even if you could identify the exact electrical > and chemical goings-on in the brain when a person experiences some > sensation, what exactly will you know? How will it help you know > something else? We know, for ex., that the amygdala is involved in > emotions, particularly anger. How does that help us understand anger > and how to deal with it? I am not saying that it is a waste of time, > I just want to know what you intend to do with the answers? > > bill w > > On Sat, Feb 14, 2015 at 9:57 AM, Stathis Papaioannou > > wrote: > > > > On Sunday, February 15, 2015, John Clark > wrote: > > On Sat, Feb 14, 2015 Stathis Papaioannou > wrote: > > > Brent believes that consciousness is not due to the > system structure but rather to the substrate. > > > If computers with the same logical structure but with > different substrates (say vacuum tubes, Germanium > transistors and Silicon transistors) all came up with > different answers when you multiplied 27 by 54 I'd have a lot > more confidence that this theory is correct, but they don't, > they all come up with 1458. Actually calling this a theory is > giving it too much credit as there is no experiment that can > be performed to prove it wrong, or even an experiment that > would allow you to learn a little more about it. 
> > > There are experiments that can be performed - try a physically > different, but chemically identical substrate using alternative > isotopes. It would work just the same, proving that working just > the same is what is important, and not what the parts are made of. > > > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Sat Feb 14 21:44:47 2015 From: avant at sollegro.com (avant at sollegro.com) Date: Sat, 14 Feb 2015 13:44:47 -0800 Subject: [ExI] Zombie glutamate Message-ID: On Sat, February 14, 2015 4:00 am, Stathis Papaioannou wrote: > I think I came up with "zombie glutamate" in response to Brent Allsop. > Brent believes that consciousness is not due to the system structure > but rather to the substrate. Well certainly. Structure is fundamental to all biological systems, regardless of whether you are talking about how suited an organism is to its environment, the function of a beating heart, or how the structure of an enzyme's active site forces reluctant atoms together to form an unlikely chemical bond. In biological systems, form *is* precisely function. While Brent is not completely wrong, because substrates do have very specific structures that enable their function, the structural considerations outweigh the simple identity of the substrate. For example, a hemoglobin molecule denatured by heat would still chemically be hemoglobin, but it would have lost its delicate folded structure and thereby all of its biological function. 
With regard to consciousness, while it is possible that a different substrate might work, in my opinion it would have to be a damn close structural analog. If you want to simulate the mind, you would have to simulate the human brain from the atoms up, along with any attendant chemistry and physics. You might even have to simulate the rest of the body as well; after all, I wouldn't feel quite like myself without my adrenal glands or my testicles subtly influencing my thinking. I don't think biological systems can tolerate much abstraction or heuristic shortcutting. > His example is that the neurotransmitter > glutamate may be responsible for red sensations (this is just an example, > Brent makes clear, not the actual role of glutamate). So if > you substitute a functional analogue for glutamate, you eliminate or > change the red sensations, even though the subject may be able to > recognise that what he is seeing is supposed to be red and can talk about > it as if he does see red - "zombie red". Well as long as the structural/functional analog is close enough, there shouldn't be any detectable difference in the outcome. When your friend pays you a visit, it makes no difference if he drives a car or rides a bike, he is still the same friend when he arrives at your door. > But I don't agree with this: I > think that if you replace the glutamate with a functional analogue such as > glutamate made of different isotopes from the ones normally present ("zombie > glutamate") the subject's brain will function exactly the same and the > subject will experience and report experiencing exactly the same > sensations. In fact, I don't believe philosophical zombies are possible. I don't believe in zombies either, and while a simulation could be very inaccurate, the idea of something that reacted to stimuli in a conscious manner being completely bereft of consciousness is silly. Indeed, I think philosophical zombies are a politically dangerous idea. 
If you can't tell a person from a zombie by their outward behavior, it sounds like a perfect excuse to disenfranchise and enslave them. Somewhat reminiscent of the way blacks, despite looking human and acting human, have historically been classified as subhuman with the intent of depriving them of their rights, with the establishment tasking a whole generation of biologists to find anatomical differences to justify the double standard. Stuart LaForge From stathisp at gmail.com Sat Feb 14 21:58:29 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 08:58:29 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: On Sunday, February 15, 2015, William Flynn Wallace wrote: > Here is my question: even if you could identify the exact electrical and > chemical goings-on in the brain when a person experiences some sensation, > what exactly will you know? How will it help you know something else? We > know, for ex., that the amygdala is involved in emotions, particularly > anger. How does that help us understand anger and how to deal with it? I > am not saying that it is a waste of time, I just want to know what you > intend to do with the answers? > > bill w > We have treatments for neurological and psychiatric conditions that are based on an understanding of the brain. > On Sat, Feb 14, 2015 at 9:57 AM, Stathis Papaioannou > wrote: > >> >> >> On Sunday, February 15, 2015, John Clark > > wrote: >> >>> On Sat, Feb 14, 2015 Stathis Papaioannou wrote: >>> >>>> >>> > Brent believes that consciousness is not due to the system >>>> structure but rather to the substrate. >>> >>> >>> If computers with the same logical structure but with different >>> substrates (say vacuum tubes, Germanium transistors and Silicon >>> transistors) all came up with different answers when you multiplied 27 by >>> 54 I'd have a lot more confidence that this theory is correct, but they >>> don't, they all come up with 1458. 
Actually calling this a theory is >>> giving it too much credit as there is no experiment that can be performed >>> to prove it wrong, or even a experiment that would allow you to learn a >>> little more about it. >>> >> >> There are experiments that can be performed - try a physically different, >> but chemically identical substrate using alternative isotopes. It would >> work just the same, proving that working just the same is what is >> important, and not what the parts are made of. >> >> >> -- >> Stathis Papaioannou >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Feb 14 22:18:20 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 14 Feb 2015 17:18:20 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: <54DFA05E.3070107@canonizer.com> References: <54DFA05E.3070107@canonizer.com> Message-ID: On Sat, Feb 14, 2015 Brent Allsop wrote: > Tomaz, William, and many others, it seems to me from what they say, still > don't understand what a redness quale is or is not. > You keep saying stuff like that but I would maintain that there is not a single member of this list who believes that redness and electromagnetic waves with a wavelength of 650 nm are the same thing. And they didn't need your paper to figure it out. > how can you prove to everyone, qualitatively, experimentally, whether it > is system structure > You can't. You can't prove anything about consciousness to a third party and so obsessing over it is a lot like video games, a complete waste of time. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Sat Feb 14 22:37:19 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 09:37:19 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: > On 15 Feb 2015, at 5:23 am, John Clark wrote: > > > > On Sat, Feb 14, 2015 at 10:57 AM, Stathis Papaioannou wrote: > >>>> >>> Brent believes that consciousness is not due to the system structure but rather to the substrate. >>> >>> >> If computers with the same logical structure but with different substrates (say vacuums tubes, Germanium transistors and Silicon transistors) all came up with different answers when you multiplied 27 by 54 I'd have a lot more confidence that this theory is correct, but they don't, they all come up with 1458. Actually calling this a theory is giving it too much credit as there is no experiment that can be performed to prove it wrong, or even a experiment that would allow you to learn a little more about it. >> >> > There are experiments that can be performed - try a physically different, but chemically identical substrate using alternative isotopes. t would work just the same > > That would just show what we already know, that intelligent behavior would not be effected; but how could you demonstrate that consciousness was not changed? You couldn't. You *could* show that, for if the consciousness were changed but not the behaviour, that would lead to absurdity. It would mean that you could go blind but not notice you were blind and behave as if you had normal vision, or you could go blind but stand by helplessly while your body behaved normally and declared that everything looked normal. > That's why consciousness theories, as opposed to intelligence theories, are so popular on the internet, they're easy because they don't have to actually do anything or explain any experimental results, they can just prattle on and on and one consciousness theory works just as well (or badly) as another. 
It would take intelligence and a lot of it to come up with a good intelligence theory; but a consciousness theoretician doesn't need that, intelligence is just optional for him. Consciousness is easy but intelligence is hard. > > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Feb 14 22:40:31 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 09:40:31 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> Message-ID: <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> > On 15 Feb 2015, at 9:18 am, John Clark wrote: > >> On Sat, Feb 14, 2015 Brent Allsop wrote: >> >> > Tomaz, William, and many others, it seems to me from what they say, still don't understand what a redness quale is or is not. > > You keep saying stuff like that but I would maintain that there is not a single member of this list who believes that redness and electromagnetic waves with a wavelength of 650 nm are the same thing. And they didn't need your paper to figure it out. > >> > how can you prove to everyone, qualitatively, experimentally, whether it is system structure > > You can't. You can't prove anything about consciousness to a third party and so obsessing over it is a lot like video games, a complete waste of time. What you can prove is that IF a being is conscious THEN its functional equivalent would also be conscious. This implies that consciousness is a necessary side-effect of certain types of behaviour. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anders at aleph.se Sat Feb 14 23:17:42 2015 From: anders at aleph.se (Anders Sandberg) Date: Sun, 15 Feb 2015 00:17:42 +0100 Subject: [ExI] Transhumanist valentines Message-ID: <2318667712-22999@secure.ericade.net> Today, as is my habit, I sent my husband a nerdy valentines picture. There is certainly no shortage of nerdy cards: http://www.demilked.com/nerdy-dirty-postcards-nicole-martinez/ (I used the planet one this year) http://www.evilmadscientist.com/2013/valentines/ http://www.sciencefriday.com/blogs/02/12/2015/this-valentine-s-day-say-i-love-you-with-science.html?series=33 https://parttimenerdblog.wordpress.com/2013/02/12/nerdy-valentines-day-2/ But are there any good *transhumanist* valentines day cards? The closest I found was this, http://cdn.someecards.com/someecards/usercards/MjAxMy00NGM1OGE5NjM0MWRkMmYw.png ("I love you enough to listen to you talk transhumanism") which might not be entirely h+ positive, and https://worldlyir.wordpress.com/2014/02/13/posthuman-valentines-from-worldly-ir/ which are funnier but maybe a bit creepy. There is of course https://c1.staticflickr.com/9/8141/7305555318_8c0133d352.jpg which makes sense for comic fans who know about the epic relationship between Brain and Monsieur Mallah. But can we do better? "You are my singularity" "I love you with all my brain" "You are the best enhancer science knows" "My love for you is growing faster than exponential" Any others? Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Sun Feb 15 00:47:57 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 14 Feb 2015 19:47:57 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sat, Feb 14, 2015 Stathis Papaioannou wrote: > What you can prove is that IF a being is conscious THEN its functional > equivalent would also be conscious. > But the only being I can prove to be conscious is myself, and unfortunately that proof is available to nobody but me. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Feb 15 01:19:36 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 12:19:36 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sunday, February 15, 2015, John Clark wrote: > On Sat, Feb 14, 2015 Stathis Papaioannou > wrote: > > > What you can prove is that IF a being is conscious THEN its functional >> equivalent would also be conscious. >> > > But the only being I can prove to be conscious is myself, and > unfortunately that proof is available to nobody but me. > Indeed, but the statement I made is still valid. It means you can open a brain prosthesis business with the guarantee that if you look after the technical aspects, any consciousness that was there will be preserved. Of course, if there wasn't any consciousness there to start with there won't be any afterwards either, but that is consistent with the guarantee. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johnkclark at gmail.com Sun Feb 15 02:23:08 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 14 Feb 2015 21:23:08 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sat, Feb 14, 2015, Stathis Papaioannou wrote: > It means you can open a brain prosthesis business with the guarantee that > if you look after the technical aspects, any consciousness that was there > will be preserved. But if one of your customers said he was conscious before but after using your prosthesis he no longer was, and demanded you honor your guarantee and give him his money back, how do you know if he's telling the truth? Do you give him his money back? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sun Feb 15 02:36:40 2015 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 14 Feb 2015 21:36:40 -0500 Subject: [ExI] Driverless racecars Message-ID: http://www.telegraph.co.uk/news/science/science-news/11410261/Driverless-car-beats-racing-driver-for-first-time.html Place your bets, Spike. ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Sun Feb 15 03:00:13 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 14 Feb 2015 20:00:13 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: <54E00BBD.8040306@canonizer.com> On 2/14/2015 3:40 PM, Stathis Papaioannou wrote: > On 15 Feb 2015, at 9:18 am, John Clark > wrote: > >> On Sat, Feb 14, 2015 Brent Allsop > > wrote: >> >> > Tomaz, William, and many others, it seems to me from what they >> say, still don't understand what a redness quale is or is not.
>> >> >> You keep saying stuff like that but I would maintain that there is >> not a single member of this list who believes that redness and >> electromagnetic waves with a wavelength of 650 nm are the same >> thing. And they didn't need your paper to figure it out. >> >> > how can you prove to everyone, qualitatively, experimentally, >> whether it is system structure >> >> >> You can't. You can't prove anything about consciousness to a third >> party and so obsessing over it is a lot like video games, a complete >> waste of time. > > What you can prove is that IF a being is conscious THEN its functional > equivalent would also be conscious. This implies that consciousness is > a necessary side-effect of certain types of behaviour. > How is this in any way a "proof"? How would you convince anyone but a functionalist that this is a "proof"? Has this argument, that it is a "proof", ever converted anyone, anywhere? Even Chalmers doesn't call it a "proof". In fact, if the prediction that reality is a materialist theory, much like the 3 color world described in the paper, is validated by experimental science, this "conjecture" at best will clearly be proven, or at least demonstrable to everyone, to be a mistake: functionalism and its so-called "hard problem" have been leading everyone completely astray for WAY too long, preventing everyone from making one of the most profound physical discoveries - ever. And, when you talk about "zombie glutamate", what I would like to see is what you would propose calling whatever the equivalent term would be in some "functionalist" theoretical world. I mean "zombie functional isomorph" may cut it in philosophical circles, but it certainly would never make it in any theoretical physics circles, since it is so obviously not even definable in physical terms, let alone detectable or testable or falsifiable. Philosophers love terms that aren't falsifiable, because nobody can falsify them.
But they are also completely useless in any theoretical science. It'd be a lot easier for experimental scientists to accept what you are describing, if you could define it in better terminology, in testable, or at least falsifiable ways. Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 15 02:59:20 2015 From: spike66 at att.net (spike) Date: Sat, 14 Feb 2015 18:59:20 -0800 Subject: [ExI] Transhumanist valentines In-Reply-To: <2318667712-22999@secure.ericade.net> References: <2318667712-22999@secure.ericade.net> Message-ID: <009901d048cb$65b9c9e0$312d5da0$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Anders Sandberg ...But can we do better? "You are my singularity" "I love you with all my brain" "You are the best enhancer science knows" "My love for you is growing faster than exponential" Any others? Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University My feelings for you transcend mere love. You stimulate the production of endorphins within me and cause surges of dopamine to the pleasure centers of my brain. My bride always falls for that one. In her case I mean it from the bottom of my brainstem: she really does all that. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 15 03:19:51 2015 From: spike66 at att.net (spike) Date: Sat, 14 Feb 2015 19:19:51 -0800 Subject: [ExI] Driverless racecars In-Reply-To: References: Message-ID: <00d901d048ce$43702430$ca506c90$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty Subject: [ExI] Driverless racecars http://www.telegraph.co.uk/news/science/science-news/11410261/Driverless-car-beats-racing-driver-for-first-time.html Place your bets, Spike. ;) Thanks Mike, this is definitely cool. This tech came along just in time.
Even for those of us who love cars and racing, even we get bored with the sport as is. The cars have gotten faster over the years, and I don't like seeing people crash. But I would likely buy tickets to watch the first multi-car race at the local track, especially if we get that going on the local short ovals, dirt track with sprint cars and such as that. Even more fun will be robot motorcycle racing, considering that racing bikes only weigh about 300 pounds, so if we get rid of about 140 pounds of rider and about half of the aero drag, that bike will go like all hell. I would pay money for tickets to see that. This could revive the sport. I want to see a motorcycle scooting down the track at 100 or better with nobody aboard doing this: http://02ff48c.netsolhost.com/WordPress/wp-content/uploads/2011/06/dirt-track-motorcycle-race-sideburn.jpg spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: application/octet-stream Size: 9002 bytes Desc: not available URL: From stathisp at gmail.com Sun Feb 15 03:18:36 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 14:18:36 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sunday, February 15, 2015, John Clark wrote: > On Sat, Feb 14, 2015, Stathis Papaioannou > wrote: > > > It means you can open a brain prosthesis business with the >> guarantee that if you look after the technical aspects, any consciousness >> that was there will be preserved. > > > But if one of your customers said he was conscious before but after using > your prosthesis he no longer was, and demanded you honor your guarantee and > give him his money back, how do you know if he's telling the truth? Do you > give him his money back?
> No, because you have a proof that if he was conscious before and he continues to behave the same way now then he is still conscious. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Feb 15 03:53:07 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 14:53:07 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: <54E00BBD.8040306@canonizer.com> References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E00BBD.8040306@canonizer.com> Message-ID: On Sunday, February 15, 2015, Brent Allsop wrote: > > On 2/14/2015 3:40 PM, Stathis Papaioannou wrote: > > On 15 Feb 2015, at 9:18 am, John Clark > wrote: > > On Sat, Feb 14, 2015 Brent Allsop > wrote: > > > Tomaz, William, and many others, it seems to me from what they say, >> still don't understand what a redness quale is or is not. >> > > You keep saying stuff like that but I would maintain that there is not a > single member of this list who believes that redness and electromagnetic > waves with a wavelength of 650 nm are the same thing. And they didn't need > your paper to figure it out. > > > how can you prove to everyone, qualitatively, experimentally, whether >> it is system structure >> > > You can't. You can't prove anything about consciousness to a third party > and so obsessing over it is a lot like video games, a complete waste of > time. > > > What you can prove is that IF a being is conscious THEN its functional > equivalent would also be conscious. This implies that consciousness is a > necessary side-effect of certain types of behaviour. > > > How is this in any way a "proof"? How would you convince, anyone but a > functionalist, that this is a "proof"? Has this argument, that it is a > "proof" ever converted anyone, anywhere? Even chalmers doesn't call it a > "proof". 
In fact, if the prediction that reality is a materialist theory, > much like the 3 color world described in the paper is validated by > experimental science, this "conjecture" at best will clearely be proven, or > at least demonstrable to everyone, that functionalism and it's so called > "hard problem" is a that has been leading everyone completely astray for > WAY to long, preventing everyone from making one of the most profound > physical discoveries - ever. > > And, when you talk about "zombie glutmate", what I would like to see is > what would you propose calling, whatever the equivalent term would be in > some "functionalist" theoretical world? I mean "zombie functional > isomorph" may cut it in philosophical circles, but it certainly would never > make it in any theoretical physics certicles since it is so obviously not > even definable, in physical terms, let alone detectable or testable or > falsifiably. Philosophers love terms that aren't falsifiable, because > nobody can falsify them. But they are also completely useless in any > theoretical science. > > It'd be a lot easier for experimental scientists to accept what you are > describing, if you could define it in better terminology, in testable, or > at least falsifiable ways. > Brent, I thought you more or less agreed with functionalism (though I don't think you realised it) when you agreed recently that if a glutamate analogue were substituted for regular glutamate and the subject said that everything seemed just the same then his qualia would be just the same. This is an experiment we could actually do today, using chemically identical but isotopically different glutamate. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at canonizer.com Sun Feb 15 03:31:35 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 14 Feb 2015 20:31:35 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: <54E01317.4080105@canonizer.com> On 2/14/2015 6:19 PM, Stathis Papaioannou wrote: > > > On Sunday, February 15, 2015, John Clark > wrote: > > On Sat, Feb 14, 2015 Stathis Papaioannou > wrote: > > > What you can prove is that IF a being is conscious THEN its > functional equivalent would also be conscious. > > > But the only being I can prove to be conscious is myself, and > unfortunately that proof is available to nobody but me. > > > Indeed, but the statement I made is still valid. It means you can open > a brain prosthesis business with the guarantee that if you look after > the technical aspects, any consciousness that was there will be > preserved. Of course, if there wasn't any consciousness there to start > with there won't be any afterwards either, but that is consistent with > the guarantee. > Anyone want to bet that you guys forgot the YET, and that it will be "proven" in less than 10 years and that there will be a near 99% of all expert consensus that it has been "proven" as powerfully as evolution, or any other such now agreed-on scientific "fact", as predicted science will verify in the paper? Stathis, would you not agree that the word red has nothing to do with a redness quality, other than it has interpretation hardware somewhere interpreting it as if it was redness, or back to the real "functional isomorph" or whatever? In other words, certainly you agree that zombie information is a real thing.
So why could you not completely reproduce a system that can behave in any way you desire, yet still, by definition, since it is operating on zombie information (it does not have the same salty or red quale), as long as it has the correct interpretation hardware, it can still map or model anything you want? Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Feb 15 05:42:00 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 16:42:00 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: <54E01317.4080105@canonizer.com> References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> Message-ID: On Sunday, February 15, 2015, Brent Allsop wrote: > On 2/14/2015 6:19 PM, Stathis Papaioannou wrote: > > > > On Sunday, February 15, 2015, John Clark > wrote: > >> On Sat, Feb 14, 2015 Stathis Papaioannou wrote: >> >> > What you can prove is that IF a being is conscious THEN its >>> functional equivalent would also be conscious. >>> >> >> But the only being I can prove to be conscious is myself, and >> unfortunately that proof is available to nobody but me. >> > > Indeed, but the statement I made is still valid. It means you can open > a brain prosthesis business with the guarantee that if you look after the > technical aspects, any consciousness that was there will be preserved. Of > course, if there wasn't any consciousness there to start with there won't > be any afterwards either, but that is consistent with the guarantee. > > > Anyone want to bet that you guys forgot the YET, and that it will be > "proven" in less than 10 years and that there will be a near 99% of all > expert consensus that it has been "proven" as powerfully as evolution, or > any other such now agreed-on scientific "fact", as predicted science will > verify in the paper? > Yes, I'd be happy to bet. How much?
> Stathis, would you not agree that the word red has nothing to do with a > redness quality, other than it has interpretation hardware somewhere > interpreting it as if it was redness, or back to the real "functional > isomorph" or whatever? In other words, certainly you agree that zombie > information is a real thing. So why could you not completely reproduce a > system that can behave in any way you desire, yet still, by definition, > since it is operating on zombie information (does not have the same salty > or red quale) yet as long as it has the correct interpretation hardware, it > can still map or model anything you want. > Yes, in theory there could be a system that interprets redness but does not experience redness. But if the system did experience redness and a part of it was changed for a functional isomorph then it would still claim to experience redness and actually experience redness. The example I gave before was a physically different but chemically identical form of glutamate. It's an experiment that we could actually do today. What do you expect would happen? How would you interpret the results? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 15 05:46:40 2015 From: spike66 at att.net (spike) Date: Sat, 14 Feb 2015 21:46:40 -0800 Subject: [ExI] Driverless racecars In-Reply-To: <00d901d048ce$43702430$ca506c90$@att.net> References: <00d901d048ce$43702430$ca506c90$@att.net> Message-ID: <01e201d048e2$c6092040$521b60c0$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty Subject: [ExI] Driverless racecars >...Even more fun will be robot motorcycle racing, considering that racing bikes only weigh about 300 pounds, so if we get rid of about 140 pounds of rider and about half of the aero drag, that bike will go like all hell. I would pay money for tickets to see that. This could revive the sport...
spike Maybe it will look a little like this: I'm not kidding, this sport would be a total hoot. A bunch of bikes out there with nobody aboard, and you could afford to take crazy risks with them. A dirt tracker doesn't really need a high-tech expensive engine. That sport is all about rider skill. A bike costing 5k is good enough to compete at the state level. Hospital bills are expensive and human bodies are irreplaceable, but we could ride these robo-bikes right on the edge. Right now the riders are so good, the optimal strategies have converged, they all look alike, or at least all the top riders do. But with robo-bikes, all the engineering still needs to develop, and the best strategies are not known. We don't even know the optimal lines on the track if we get rid of a third of the weight and nearly half the aerodrag. The gearing would need to change for higher top speeds and to take advantage of higher accelerations available. Now instead of individuals, you would have university teams doing the software, big sponsors and clever privateers all racing together. The usual jocks along with professors, software companies (you know Google will want in on this) all the usual suspects along with wimpy geeks who never rode a motorcycle, all could play. This will be a mechanical engineer's playground. I have gotten to where I think about the ethics of motorcycle racing. I know I am paying guys to do risky sports and ja it does give me pause. Football: we pay guys to wreck their brains by using their damn heads as battering rams. Well I don't buy football tickets, and aren't I just such a righteous sort. But I have been known to go to the local motorcycle races, and guys do wipe out sometimes, so that is the same principle, ja? Maybe worse. But I could watch robo-races with a clean conscience and have a hell of a good time. You know this stuff is coming soon. Oh my the next decade will be a fun time to be alive.
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: application/octet-stream Size: 41960 bytes Desc: not available URL: From rafal.smigrodzki at gmail.com Sun Feb 15 07:23:10 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 15 Feb 2015 02:23:10 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> Message-ID: On Sun, Feb 15, 2015 at 12:42 AM, Stathis Papaioannou wrote: > > > Yes, in theory there could be a system that interprets redness but does > not experience redness. But if the system did experience redness and a part > of it was changed for a functional isomorph then it would still claim to > experience redness and actually experience redness. The example I gave > before was a physically different but chemically identical form of > glutamate. It's an experiment that we could actually do today. What do you > expect would happen? How would you interpret the results? > ### The problem with suggesting that qualia are determined by the exact physical structure of the entity experiencing them, rather than functional isomorphism, is that you can't justifiably stop at some point on the scale of "exact". You suggested a thought experiment with substituting glutamate but one can go on: What if qualia perceived using yesterday's brain (under slightly different gravitational, electromagnetic and chemical influences) are substantially different from today's? Of course, since our memories of yesterday's qualia are retrieved using today's brain, we wouldn't know. All of us, both conscious and philosophically zombiefied, would make the same mouth noises. What if the movement of Jupiter, well-known as the bringer of jollity, makes our qualia surreptitiously dance a merry jig?
This said, I don't think that functional isomorphism can be defined strictly behaviorally - you need to also observe the internal processing of information in the system, and not just its interactions with the environment, to define function. It remains a question whether, if you use a different processing algorithm to compute the same properties of an experienced object, you may have qualitatively different experiences. Reflectances (i.e. color) in a visual input can be calculated by a cortex or a robotic visual system, and could trigger the same behavior - correctly naming colors in pictures. Would the corresponding qualia be different? My guess is yes, they would, much like the smell and the look of a skunk can trigger the same verbal output but do differ enormously on the subjective level. Once we are able to connect a robotic color discriminator directly to your brain, while keeping the old visual cortex around, we will be able to confirm that the qualia differ, although in what particular way would remain most likely ineffable. Rafał -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Feb 15 09:24:44 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 20:24:44 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> Message-ID: On Sunday, February 15, 2015, Rafal Smigrodzki wrote: > > > On Sun, Feb 15, 2015 at 12:42 AM, Stathis Papaioannou > wrote: >> >> >> Yes, in theory there could be a system that interprets redness but does >> not experience redness. But if the system did experience redness and a part >> of it was changed for a functional isomorph then it would still claim to >> experience redness and actually experience redness. The example I gave >> before was a physically different but chemically identical form of >> glutamate.
It's an experiment that we could actually do today. What do you >> expect would happen? How would you interpret the results? >> > > ### The problem with suggesting that qualia are determined by the exact > physical structure of the entity experiencing them, rather than functional > isomorphism, is that you can't justifiably stop at some point on the scale > of "exact". You suggested a thought experiment with substituting glutamate > but one can go on: What if qualia perceived using yesterday's brain (under > slightly different gravitational, electromagnetic and chemical influences) > are substantially different from today's? Of course, since our memories of > yesterday's qualia are retrieved using today's brain, we wouldn't know. All > of us, both conscious and philosophically zombiefied, would make the same > mouth noises. What if the movement of Jupiter, well-known as the bringer of > jollity, makes our qualia surreptitiously dance a merry gig? > > This said, I don't think that functional isomorphism can be defined > strictly behaviorally - you need to also observe the internal processing of > information in the system, and not just its interactions with the > environment to define function. > > It remains a question whether, if you use a different processing algorithm > to compute the same properties of experienced object, you may have > qualitatively different experiences. > The idea is not to copy behaviour as such but to model the components. An engineer can take a component from a machine, put it through a series of tests, and make a replacement component from perhaps completely different parts that, if done properly, should work just like the original when installed - even if the exact way the machine works is unknown. > Reflectances (i.e. color) in a visual input can be calculated by a cortex > or a robotic visual system, and could trigger the same behavior - correctly > naming colors in pictures. Would the corresponding qualia be different? 
My > guess is yes, they would, much like the smell and the look of a skunk can > trigger the same verbal output but do differ enormously on the subjective > level. > > Once we are able to connect a robotic color discriminator directly to your > brain, while keeping the old visual cortex around, we will be able to > confirm that the qualia differ, although in what particular way would > remain most likely ineffable. > > Rafał > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Sun Feb 15 09:26:42 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sun, 15 Feb 2015 10:26:42 +0100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> Message-ID: I am very sure that you can have a Turing machine in any form you want. From Lego, to electronics, to an old steam locomotive on long rails picking up stones and laying them down in a Turing machine fashion ... as long as this machine performs a good enough simulation of my current brain, I will feel as I am writing this. On Sun, Feb 15, 2015 at 8:23 AM, Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: > > > On Sun, Feb 15, 2015 at 12:42 AM, Stathis Papaioannou > wrote: >> >> >> Yes, in theory there could be a system that interprets redness but does >> not experience redness. But if the system did experience redness and a part >> of it was changed for a functional isomorph then it would still claim to >> experience redness and actually experience redness. The example I gave >> before was a physically different but chemically identical form of >> glutamate. It's an experiment that we could actually do today. What do you >> expect would happen? How would you interpret the results?
>> > > ### The problem with suggesting that qualia are determined by the exact > physical structure of the entity experiencing them, rather than functional > isomorphism, is that you can't justifiably stop at some point on the scale > of "exact". You suggested a thought experiment with substituting glutamate > but one can go on: What if qualia perceived using yesterday's brain (under > slightly different gravitational, electromagnetic and chemical influences) > are substantially different from today's? Of course, since our memories of > yesterday's qualia are retrieved using today's brain, we wouldn't know. All > of us, both conscious and philosophically zombiefied, would make the same > mouth noises. What if the movement of Jupiter, well-known as the bringer of > jollity, makes our qualia surreptitiously dance a merry jig? > > This said, I don't think that functional isomorphism can be defined > strictly behaviorally - you need to also observe the internal processing of > information in the system, and not just its interactions with the > environment to define function. > > It remains a question whether, if you use a different processing algorithm > to compute the same properties of an experienced object, you may have > qualitatively different experiences. > > Reflectances (i.e. color) in a visual input can be calculated by a cortex > or a robotic visual system, and could trigger the same behavior - correctly > naming colors in pictures. Would the corresponding qualia be different? My > guess is yes, they would, much like the smell and the look of a skunk can > trigger the same verbal output but do differ enormously on the subjective > level. > > Once we are able to connect a robotic color discriminator directly to your > brain, while keeping the old visual cortex around, we will be able to > confirm that the qualia differ, although in what particular way would > remain most likely ineffable. > > Rafał
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Feb 15 09:55:52 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 20:55:52 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> Message-ID: <9506E8D1-D6FD-4DCC-9975-581DEB177868@gmail.com> > On 15 Feb 2015, at 8:26 pm, Tomaz Kristan wrote: > > I am very sure, that you can have a Turing machine in any form you want. From Lego, to electronics, to old steam locomotive on a long rails picking up stones and laying them down in a Turing machine fashion ... as long as this machine performs a good enough simulation of my current brain, I will feel as I am writing this. I agree, but it has to be carefully justified. It is possible that there is physics in your brain that is not Turing emulable. It is also on the face of it possible that your consciousness is due to biological processes in your brain and that a computer simulation may behave like you but won't have your consciousness. On further analysis, this turns out to be false. From protokol2020 at gmail.com Sun Feb 15 10:49:00 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sun, 15 Feb 2015 11:49:00 +0100 Subject: [ExI] Zombie glutamate In-Reply-To: <9506E8D1-D6FD-4DCC-9975-581DEB177868@gmail.com> References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> <9506E8D1-D6FD-4DCC-9975-581DEB177868@gmail.com> Message-ID: Stathis: > in your brain that is not Turing emulable. It's a small chance, that in fact it is so! That something more is going on. 
Not only inside our heads, but elsewhere as well. I bet, however, on the other option. That nothing inside this world is Trans-Turing. Maybe more efficiently done, but not essentially different. This is my current best bet. Waiting for Minerva's owl to fly. On Sun, Feb 15, 2015 at 10:55 AM, Stathis Papaioannou wrote: > > > > > On 15 Feb 2015, at 8:26 pm, Tomaz Kristan > wrote: > > > > I am very sure, that you can have a Turing machine in any form you want. > From Lego, to electronics, to old steam locomotive on a long rails picking > up stones and laying them down in a Turing machine fashion ... as long as > this machine performs a good enough simulation of my current brain, I will > feel as I am writing this. > > I agree, but it has to be carefully justified. It is possible that there > is physics in your brain that is not Turing emulable. It is also on the > face of it possible that your consciousness is due to biological processes > in your brain and that a computer simulation may behave like you but won't > have your consciousness. On further analysis, this turns out to be false. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Sun Feb 15 11:17:45 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sun, 15 Feb 2015 12:17:45 +0100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> <9506E8D1-D6FD-4DCC-9975-581DEB177868@gmail.com> Message-ID: So, my precious consciousness (with no irony here) can likely be almost too easily emulated. Maybe inside a computer not more complex than this PC. Maybe a bigger computer _is_ required, I don't know.
Maybe a very simple one, like the Monte, the not-so-smart phone I had. Maybe even simpler? Might be. Might be also MUCH more complicated than today's largest computer or than the Internet. It's a possibility. But at the end, it's "just" a Turing machine model. And it can apparently cause the consciousness I know. On Sun, Feb 15, 2015 at 11:49 AM, Tomaz Kristan wrote: > Stathis: > > > in your brain that is not Turing emulable. > > It's a small chance, that in fact it is so! That something more is going > on. Not only inside our heads, but elsewhere as well. > > I bet, however, on the other option. That nothing inside this world is > Trans-Turing. Maybe more efficiently done, but not essentially different. > > This is my current best bet. Waiting for the Minerva's owl to fly. > > > > > On Sun, Feb 15, 2015 at 10:55 AM, Stathis Papaioannou > wrote: > >> >> >> >> > On 15 Feb 2015, at 8:26 pm, Tomaz Kristan >> wrote: >> > >> > I am very sure, that you can have a Turing machine in any form you >> want. From Lego, to electronics, to old steam locomotive on a long rails >> picking up stones and laying them down in a Turing machine fashion ... as >> long as this machine performs a good enough simulation of my current brain, >> I will feel as I am writing this. >> >> I agree, but it has to be carefully justified. It is possible that there >> is physics in your brain that is not Turing emulable. It is also on the >> face of it possible that your consciousness is due to biological processes >> in your brain and that a computer simulation may behave like you but won't >> have your consciousness. On further analysis, this turns out to be false.
>> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > -- > https://protokol2020.wordpress.com/ > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Feb 15 11:47:13 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 15 Feb 2015 22:47:13 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> <9506E8D1-D6FD-4DCC-9975-581DEB177868@gmail.com> Message-ID: On 15 February 2015 at 22:17, Tomaz Kristan wrote: > So, my precious consciousness (with no irony here) can likely be almost too > easily emulated. > > Maybe inside a computer not more complex than this PC. Maybe a bigger > computer _is_ required, I don't know. Maybe a very simple one, like the > Monte, the no-so-smart phone, I had. > > Maybe even simpler? Might be. Might be also MUCH more complicated than > today's largest computer or than the Internet. It's a possibility. You don't need a big and complex computer to simulate anything, whether the brain or the universe containing it, you just need a general purpose computer (a smart phone will do) and enough storage. What you need high powered hardware for is if you want to emulate a brain in real time. > But at the end, it's "just" a Turing machine model. And it can apparently > causes the consciousness I know. 
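Stathis's universality point above, that any general-purpose machine with enough storage can run the same computation, only more slowly, can be made concrete with a minimal Turing-machine simulator. This is an illustrative sketch only; the rule encoding and the toy bit-flipping machine below are invented for the example, not taken from the thread:

```python
def run_tm(rules, tape, state="s", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1, 0, or +1. The machine halts when no rule applies.
    Returns the visited portion of the tape with blanks trimmed."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        key = (state, cells.get(head, blank))
        if key not in rules:
            break  # no applicable rule: halt
        state, cells[head], move = rules[key]
        head += move
    if not cells:
        return ""
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A toy machine that flips every bit and halts at the first blank cell.
flip = {
    ("s", "0"): ("s", "1", +1),
    ("s", "1"): ("s", "0", +1),
}
print(run_tm(flip, "0110"))  # -> 1001
```

Whether the rules are realised in Lego, steam and stones, or silicon changes nothing in the table of transitions, which is the point of the universality claim; only the step rate differs.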
-- Stathis Papaioannou From protokol2020 at gmail.com Sun Feb 15 12:02:10 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sun, 15 Feb 2015 13:02:10 +0100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> <9506E8D1-D6FD-4DCC-9975-581DEB177868@gmail.com> Message-ID: Yes. This "enough storage" is the key. And very likely, an MB would suffice. The time component - how fast an emulation, relative to real time, could we achieve with today's PC? I guess we should manage at least 1/1000 speed. Maybe even more. It's another matter if we are smart enough to perform these emulations in a short time span. But eventually, it's a pretty much inescapable road. No "irreducible complexity" or "spirituality" spells can prevent it. On Sun, Feb 15, 2015 at 12:47 PM, Stathis Papaioannou wrote: > On 15 February 2015 at 22:17, Tomaz Kristan > wrote: > > So, my precious consciousness (with no irony here) can likely be almost > too > > easily emulated. > > > > Maybe inside a computer not more complex than this PC. Maybe a bigger > > computer _is_ required, I don't know. Maybe a very simple one, like the > > Monte, the no-so-smart phone, I had. > > > > Maybe even simpler? Might be. Might be also MUCH more complicated than > > today's largest computer or than the Internet. It's a possibility. > > You don't need a big and complex computer to simulate anything, > whether the brain or the universe containing it, you just need a > general purpose computer (a smart phone will do) and enough storage. > What you need high powered hardware for is if you want to emulate a > brain in real time. > > > But at the end, it's "just" a Turing machine model. And it can apparently > > causes the consciousness I know.
> > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Sun Feb 15 12:29:14 2015 From: anders at aleph.se (Anders Sandberg) Date: Sun, 15 Feb 2015 13:29:14 +0100 Subject: [ExI] Driverless racecars In-Reply-To: <00d901d048ce$43702430$ca506c90$@att.net> Message-ID: <2368962779-32389@secure.ericade.net> And there is also Formula E for electric racing cars. According to my contacts at an insurance company sponsoring one of the teams (Amlin Aguri), the real challenge is that you need to recharge batteries/switch car strategically: it is not possible to run a full race on one charge. This apparently adds an element of probabilistic strategy, and some insurance guys are apparently modelling it. Making the cars driverless seems to be the next step. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From test at ssec.wisc.edu Sun Feb 15 14:54:18 2015 From: test at ssec.wisc.edu (Bill Hibbard) Date: Sun, 15 Feb 2015 08:54:18 -0600 (CST) Subject: [ExI] Transhumanist valentines Message-ID: > Any others? Say it with blue roses: http://en.wikipedia.org/wiki/Blue_rose#Genetically_engineered_roses From sparge at gmail.com Sun Feb 15 15:40:55 2015 From: sparge at gmail.com (Dave Sill) Date: Sun, 15 Feb 2015 10:40:55 -0500 Subject: [ExI] Driverless racecars In-Reply-To: <2368962779-32389@secure.ericade.net> References: <00d901d048ce$43702430$ca506c90$@att.net> <2368962779-32389@secure.ericade.net> Message-ID: On Sun, Feb 15, 2015 at 7:29 AM, Anders Sandberg wrote: > Making the cars driverless seems to be the next step. 
No doubt that idea has appeal, but the fundamental draw of athletics is the performance of the athletes. What can the best humans on the planet do? Sure, I'd like to see a driverless race. As in, one driverless race. It's a novelty, not something I'd watch weekly. Robot golf, tennis, baseball, etc. would all fall in that category for me. A robot that could shoot foul shots (or three-pointers) with five-9's accuracy would be the most boring thing imaginable--except perhaps for the team that created it. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Feb 15 15:57:58 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 15 Feb 2015 10:57:58 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sat, Feb 14, 2015 at 10:18 PM, Stathis Papaioannou wrote: > >>> It means you can open a brain prosthesis business with the >>> guarantee that if you look after the technical aspects, any consciousness >>> that was there will be preserved. >> >> >> >> But if one of your customers said he was conscious before but after >> using your prosthesis he no longer was and demanded you honor your >> guarantee and give him his money back how do you know if he's telling the >> truth? Do you give him his money back? >> > > > No, because you have a proof that if he was conscious before > How can you have a proof of that? > > and he continues to behave the same way now then he is still conscious. > He continues to behave intelligently and if he is still conscious (but is he?) that would be consistent with the theory that consciousness is the inevitable byproduct of intelligence. I am certain this theory is true, and I'm probably correct too, and there is considerable evidence in its favor; but is there proof? No and there never will be.
And there are other theories that are also consistent with everything known about consciousness, for example the theory that 2 and only 2 attributes are needed to be conscious, being male and being 6 feet 2 inches tall. The theory that consciousness is generated by the left big toe and those lacking such a toe are not conscious is also consistent with all the evidence. You may say these theories are silly and they are, but like all consciousness theories they can't be proven wrong. The consciousness-intelligence link theory can't be proven wrong either but it has something the others do not, it is the exact same rule of thumb that every single one of us has used every single day of our lives to determine if something is conscious or not. And I can think of no reason to abandon this very useful rule of thumb just because computers are starting to behave intelligently. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 15 16:09:10 2015 From: spike66 at att.net (spike) Date: Sun, 15 Feb 2015 08:09:10 -0800 Subject: [ExI] Driverless racecars In-Reply-To: <2368962779-32389@secure.ericade.net> References: <00d901d048ce$43702430$ca506c90$@att.net> <2368962779-32389@secure.ericade.net> Message-ID: <013a01d04939$bcaa3820$35fea860$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Anders Sandberg Sent: Sunday, February 15, 2015 4:29 AM To: ExI chat list Subject: Re: [ExI] Driverless racecars > And there is also Formula E for electric racing cars. According to my contacts at an insurance company sponsoring one of the teams (Amlin Aguri), the real challenge is that you need to recharge batteries/switch car strategically: it is not possible to run a full race on one charge. This apparently adds an element of probabilistic strategy, and some insurance guys are apparently modelling it. Making the cars driverless seems to be the next step. Anders Sandberg
The control algorithms for electric cars are easier than with IC engines, but I still look forward to seeing the explosive power of gasoline coupled with state-of-the-art control theory. I keep bringing up motorcycles for a reason. With cars, we are already at the materials limits, so there is no good reason to imagine driverless cars will go much faster than good humans. But with the bikes, swapping even a light rider for 10 Kg of actuators tucked low out of the wind will allow us to make bikes that go faster than any human rider. To press this a little further, consider the sport of stadium motocross https://www.youtube.com/watch?v=54zRL1z7NSk Beyond the stunning visuals of those bikes doing their thing with nobody aboard, consider the bikes only weigh 200 pounds, and the lightest riders are over 100, so we lose over a third of the weight. Without a rider we might even be able to lose some of the bike itself: we no longer need a seat for instance, or fenders, or handlebars, and perhaps we could even redesign the suspension for lowered weight. It becomes an entirely new game to design the fastest machine around that track. Parting shot: bikes like these can be had new for 5k, and you really wouldn't hurt the essential stuff if you crash. I attended the original DARPA challenge in 2004 for driverless cars. No one finished. In that demonstration there was one bike. We see how far we have come in 11 years with the cars. I can imagine the next ten years being some of the most fun in racing in our lifetimes. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Sun Feb 15 16:24:57 2015 From: anders at aleph.se (Anders Sandberg) Date: Sun, 15 Feb 2015 17:24:57 +0100 Subject: [ExI] Driverless racecars In-Reply-To: Message-ID: <2383061459-14801@secure.ericade.net> Dave Sill , 15/2/2015 4:43 PM: On Sun, Feb 15, 2015 at 7:29 AM, Anders Sandberg wrote: Making the cars driverless seems to be the next step.
No doubt that idea has appeal, but the fundamental draw of athletics is the performance of the athletes. What can the best humans on the planet do? I remember debating human enhancement in sport a few years ago, when Soren Holm made roughly the same point: we watch sport because we like to see humans do awesome things. He was sceptical that we would enjoy seeing technological systems compete. Then I brought up monster truck rallies. I think what we want to see is *someone* excelling, but not necessarily in the form of a direct brain-muscle-motion connection: we are happy if it is remote as long as there are ingenuity and creativity going on. The Kasparov-Deep Blue chess match was interesting even though one participant was mindless, and Kasparov did think one move was truly creative. Maybe a sufficiently complex resource allocation and drive system might be interesting to watch too - especially since we can change the rules for Formula Something to make it interesting. After all, Formula One changes rules and technical limitations constantly to keep it watchable. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Sun Feb 15 16:33:19 2015 From: anders at aleph.se (Anders Sandberg) Date: Sun, 15 Feb 2015 17:33:19 +0100 Subject: [ExI] Transhumanist valentines In-Reply-To: Message-ID: <2383477266-14801@secure.ericade.net> Bill Hibbard , 15/2/2015 4:06 PM: > Any others? Say it with blue roses: http://en.wikipedia.org/wiki/Blue_rose#Genetically_engineered_roses Neat idea! Somewhat related, and way more personal, is to give flowers with your own DNA: http://www.ekac.org/nat.hist.enig.html "The new flower is a Petunia strain that I invented and produced through molecular biology. It is not found in nature.
The Edunia has red veins on light pink petals and a gene of mine is expressed on every cell of its red veins, i.e., my gene produces a protein in the veins only [2]. The gene was isolated and sequenced from my blood. The petal pink background, against which the red veins are seen, is evocative of my own pinkish white skin tone. The result of this molecular manipulation is a bloom that creates the living image of human blood rushing through the veins of a flower." You can say:

Petunias are red,
Roses are blue,
Thaumatin is sweet,
And so are you.

Here is a flower, I made it myself. I really ought to do that. And of course, one can add love poetry to the genome. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Sun Feb 15 20:23:56 2015 From: sparge at gmail.com (Dave Sill) Date: Sun, 15 Feb 2015 15:23:56 -0500 Subject: [ExI] Driverless racecars In-Reply-To: <2383061459-14801@secure.ericade.net> References: <2383061459-14801@secure.ericade.net> Message-ID: On Sun, Feb 15, 2015 at 11:24 AM, Anders Sandberg wrote: > Dave Sill , 15/2/2015 4:43 PM: > > > No doubt that idea has appeal, but the fundamental draw of athletics is > the performance of the athletes. What can the best humans on the planet do? > > > I remember debating human enhancement in sport a few years ago, when Soren > Holm made roughly the same point: we watch sport because we like to see > humans do awesome things. He was sceptical that we would enjoy seeing > technological systems compete. Then I brought up monster truck rallys. > Oh, are you a fan? I'm not. I could, conceivably, enjoy *one* of these just for the novelty of the experience. But tuning in regularly, becoming a fan of one particular truck/team and rooting for it, tracking developments, etc... Nope. And these trucks have human drivers.
I think what we want to see is *someone* excelling, but not necessarily in > the form of a direct brain-muscle-motion connection: we are happy if it is > remote as long as there are ingenuity and creativity going on. > I'm not saying that these exhibitions of robot skill are uninteresting or not entertaining. I just think they're more in the novelty category: one-off stunts that interest people because they've never been done before. Watson on Jeopardy and Kasparov-Deep Blue were great, but nobody is rushing to create regular events based on them because we already know who's going to win. And computer vs. computer competitions haven't demonstrated any mass appeal. Ever seen the World Computer Chess Championship on a major network? > The Kasparov-Deep Blue chess match was interesting even though one > participant was mindless, and Kasparov did think one move was truly > creative. > Proof that you can be a genius at one thing while ignorant about something else. Maybe a sufficiently complex resource allocation and drive system might be > interesting to watch too - especially since we can change the rules for > Formula Something to make it interesting. After all, Formula One changes > rules and technical limitations constantly to keep it watchable. > Formula One has plenty of human elements and the rules changes and technical limitations only detract from it, IMO. -Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Sun Feb 15 21:29:22 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 16 Feb 2015 08:29:22 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Monday, February 16, 2015, John Clark wrote: > > On Sat, Feb 14, 2015 at 10:18 PM, Stathis Papaioannou > wrote: > >> > >>> It means you can open a brain prosthesis business with the >>>> guarantee that if you look after the technical aspects, any consciousness >>>> that was there will be preserved. >>> >>> >>> >> But if one of your customers said he was conscious before but after >>> using your prosthesis he no longer was and demanded you honor your >>> guarantee and give him his money back how do you know if he's telling the >>> truth? Do you give him his money back? >>> >> >> > No, because you have a proof that if he was conscious before >> > > How can you have a proof of that? > You can prove it conditionally, without knowing whether he was conscious before or not. > and he continues to behave the same way now then he is still conscious. >> > > He continues to behave intelligently and if he is still conscious (but is > he?) that would be consistent with the theory that consciousness is the > inevitable by product of intelligence. I am certain this theory is true, > and I'm probably correct too, and there is considerable evidence in it's > favor; but is there proof? No and there never will be. > Yes, there is a proof that IF a being is conscious THEN its functional isomorph is necessarily conscious. It doesn't help you to know if something is conscious, but it does give you confidence that appropriately implemented brain implants or mind uploading will preserve what consciousness was there.
> And there are other theories that are also consistent with everything > known about consciousness, for example the theory that 2 and only 2 > attributes are needed to be conscious, being male and being 6 feet 2 inches > tall. The theory that conscious is generated by the left big toe and those > lacking such a toe are not conscious is also consistent with all the > evidence. You may say these theories are silly and they are, but like all > consciousness theories they can't be proven wrong. The > consciousness-intelligence link theory can't be proven wrong either but it > has something the others do not, it is the exact same rule of thumb that > every single one of us has used every single day of our lives to determine > if something is conscious or not. And I can think of no reason to abandon > this very useful rule of thumb just because computers are starting to > behave intelligently. > > John K Clark > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 16 06:45:49 2015 From: spike66 at att.net (spike) Date: Sun, 15 Feb 2015 22:45:49 -0800 Subject: [ExI] Driverless racecars In-Reply-To: References: <2383061459-14801@secure.ericade.net> Message-ID: <013001d049b4$34305cb0$9c911610$@att.net> Dave Sill , 15/2/2015 4:43 PM: > No doubt that idea has appeal, but the fundamental draw of athletics is the performance of the athletes. -Dave I can't get this out of my mind. Driverless cars have had and are having a huge impact on our thinking. I see the Google car tooling around several times a year now. I saw it a couple months ago over by the Apple mothership. But the visual image of a motorcycle going by itself will be so striking it will change hearts and minds. It will completely change what we think a computer can do. Soon we will have open market self-driving cars, but even those of us who anticipate these things will not trust them.
I won't: I will be a Nervous Nellie the whole time the thing is driving. But consider once they get good enough you have a stadium motocross with a pack of guys chasing one robo-bike and can't catch it. That entire stadium will be transformed. I can't help thinking there must be a way to make a buttload of money off of this. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Feb 16 10:55:34 2015 From: pharos at gmail.com (BillK) Date: Mon, 16 Feb 2015 10:55:34 +0000 Subject: [ExI] META: SOFTWARE: MS OneNote 2013 now completely freeware Message-ID: Microsoft reports February 13, 2015: We're removing all feature restrictions from OneNote 2013. Starting today you'll be able to access the full power of OneNote on your PC, including these features previously reserved for paid editions:

Password protected sections - Now you can password protect a section, which helps you secure sensitive information in a particular section.
Page history - We make many changes to a single page or note. This feature allows you to see the history of changes you have made.
Audio and video recording - OneNote 2013 now allows you to take notes through audio and video recording for free.
Audio search - Now you can search for a particular word which is present in an audio or video recording. This is really interesting!
Embedded files - You can insert Office and other documents directly into your notebook for free.

The free edition of OneNote stores your notes on OneDrive for easy access across all your devices and works whether you're online or offline. With your free Microsoft account, you'll get 15 GB of OneDrive space for free and no limits on the number of notes you can create or sync.
OneNote 2013 runs on Windows 7 and Windows 8 and is available for free from --------------- BillK From johnkclark at gmail.com Mon Feb 16 15:21:35 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 16 Feb 2015 10:21:35 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sun, Feb 15, 2015 at 4:29 PM, Stathis Papaioannou wrote: > > there is a proof that IF a being is conscious > But you can't prove that any being is conscious. > > THEN it is necessarily conscious. > Since you can't prove what causes consciousness you can't prove that moving the mind to a different substrate, like from biology to electronics, won't affect or destroy the consciousness. However having said that I must say that absolute proof is very rarely available for real world problems, there is nearly always some uncertainty but that doesn't prevent us from acting. And I think the evidence for the intelligence-consciousness link is so strong (although falling short of a proof) that I wouldn't worry a bit about being uploaded into a computer. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Feb 16 16:09:55 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 16 Feb 2015 11:09:55 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: On Sat, Feb 14, 2015 at 4:44 PM, wrote: > While Brent is not completely wrong because substrates do have very > specific structures that enable their function, the structural > considerations outweigh the simple identity of the substrate. For example > a hemoglobin molecule denatured by heat would still chemically be > hemoglobin, but it will have lost its delicate folded structure and > thereby all of its biological function.
> Denatured hemoglobin chemically reacts very differently than non-denatured hemoglobin does, and the logical structure of a brain fed by denatured hemoglobin would be quite different from your brain, the neurons would respond to signals differently because they were dead, killed by lack of oxygen. But if done competently the logical schematic of your uploaded brain in an electronic computer would be identical to the logical schematic of your biological brain. > > If you want to simulate the mind, you would have to > simulate the human brain from the atoms up along with any attendant > chemistry and physics. You might even have to simulate the rest of the > body as well, after all, I wouldn't feel quite like myself without my > adrenal glands or my testicles subtly influencing my thinking. I see nothing sacred in hormones, I don't see the slightest reason why they or any neurotransmitter would be especially difficult to simulate through computation, because chemical messengers are not a sign of sophisticated design on nature's part, rather it's an example of Evolution's bungling. If you need to inhibit a nearby neuron there are better ways of sending that signal than launching a GABA molecule like a message in a bottle thrown into the sea and waiting ages for it to diffuse to its random target. I'm not interested in chemicals only the information they contain, I want the information to get transmitted from cell to cell by the best method and few would send smoke signals if they had a fiber optic cable. The information content in each molecular message must be tiny, just a few bits because only about 60 neurotransmitters such as acetylcholine, norepinephrine and GABA are known, even if the true number is 100 times greater (or a million times for that matter) the information content of each signal must be tiny. Also, for the long range stuff, exactly which neuron receives the signal cannot be specified because it relies on a random process, diffusion.
The fact that it's slow as molasses in February does not add to its charm. If your job is delivering packages and all the packages are very small and your boss doesn't care who you give them to as long as it's on the correct continent and you have until the next ice age to get the work done, then you don't have a very difficult profession. I see no reason why simulating that anachronism would present the slightest difficulty. Artificial neurons could be made to release neurotransmitters as inefficiently as natural ones if anybody really wanted to, but it would be pointless when there are much faster ways. Electronics is inherently fast because its electrical signals are sent by light, fast electrons. The brain also uses some electrical signals, but it doesn't use electrons, it uses ions to send signals; the most important are chlorine and potassium. A chlorine ion is 65 thousand times as heavy as an electron, and a potassium ion is even heavier; if you want to talk about gap junctions, the ions they use are millions of times more massive than electrons. There is no way to get around it: according to the fundamental laws of physics, something that has a large mass will be slow, very, very slow. The great strength biology has over present day electronics is in the ability of one neuron to make thousands of connections of various strengths with other neurons. However, I see absolutely nothing in the fundamental laws of physics that prevents nano machines from doing the same thing, or better and MUCH faster. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
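The "65 thousand times as heavy" figure can be checked from standard reference values (a sketch; the atomic masses and the u-to-electron-mass conversion are textbook constants, not anything stated in the post): one unified atomic mass unit is about 1822.9 electron masses.

```python
U_IN_ELECTRON_MASSES = 1822.9  # 1 u expressed in electron masses

# standard atomic masses (in u) for the two ions named in the post
ions = {"Cl-": 35.45, "K+": 39.10}

for name, mass_u in ions.items():
    ratio = mass_u * U_IN_ELECTRON_MASSES
    print(f"{name}: ~{ratio:,.0f} electron masses")

# Cl- works out to roughly 64,600 electron masses -- close to the
# "65 thousand times" quoted above -- and K+ is indeed heavier still.
```

The one or two electron masses gained or lost in ionization are negligible at this scale, so using neutral atomic masses is a fair approximation.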
URL: From brent.allsop at canonizer.com Mon Feb 16 16:28:04 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Mon, 16 Feb 2015 09:28:04 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: Hi Stathis, What do you think the chances are that we are still simply miscommunicating? We both think we understand the other, but I bet one of us is more mistaken in this belief than the other. I desperately want to better understand the way you think, and fear I am still missing something important. Let me state some of my understanding about the way you think about the qualitative nature of consciousness. You understand what "zombie glutamate" is, but you think such is logically, or mathematically provably, not possible? You think there is no solution to the "hard problem", or that it is unapproachable via science. This includes your belief that we will never be able to determine in any way any kind of diversity of phenomenal consciousness (i.e. including things like being able to detect simple red green qualitative inversion, let alone the difference between more significant types of diversity like bichromats vs trichromats, tetrachromats). It is impossible for you to imagine how anyone could ever experience a new blue they have never experienced before. You prefer talking about zombie functional isomorphs, or think zombie functional isomorphs are more consistent with your thinking, as you think such is more possible than zombie glutamate? Do you think it would be possible to build a system, purely out of zombie information (i.e. by definition, has no qualitative properties/consciousness), that could pass a Turing test? Brent -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Mon Feb 16 16:49:32 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 17 Feb 2015 03:49:32 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Tuesday, February 17, 2015, John Clark > wrote: > On Sun, Feb 15, 2015 at 4:29 PM, Stathis Papaioannou > wrote: > >> > > there is a proof that IF a being is conscious >> > > But you can't prove that any being is conscious. > That's right, but it doesn't follow that you can't prove anything else conditionally. > > THEN it is necessarily conscious. >> > > Since you can't prove what causes consciousness you can't prove that > moving the mind to a different substrate, like from biology to electronics, > won't effect or destroy the consciousness. > Yes, you CAN prove just that, with the only assumption being that consciousness, if it exists, is due to physical processes in your brain. As I've tried to explain several times, the argument is a reductio ad absurdum. If we say that a replacement part functions perfectly according to every test but lacks the ability to sustain consciousness, that would allow us to make you completely blind but you would behave normally, according to any test we apply to you, and either honestly believe that you had normal vision or, if you realised you were blind, would be unable to control your vocal cords as your mouth declared that everything was normal. If the former, consciousness is meaningless; if the latter, the assumption that you think with your brain is contradicted. > However having said that I must say that absolute proof is very rarely > available for real world problems, there is nearly always some uncertainty > but that doesn't prevent us from acting. And I think the evidence for the > intelligence-consciousness link is so strong (although falling short of a > proof) that I wouldn't worry a bit about being uploaded into a computer. 
> But you can prove it, despite being unable to prove that any particular being is conscious. This is what I have been saying for several posts. What is needed is only the assumption that the mind is due to mechanical processes in the brain. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Feb 16 17:08:23 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 17 Feb 2015 04:08:23 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Tuesday, February 17, 2015, Brent Allsop wrote: > > > Hi Stathis, > > > > What do you think the chances are that we are still simply > miscommunicating? We both think we understand the other, but I bet one of > us is more mistaken in this belief than the other. I desperately want > to better understand the way you think, and fear I am still missing > something important. Let me state some of my understanding about the way > you think about the qualitative nature of consciousness. > > I'm sure we are somehow miscommunicating, and it is frustrating. > You understand what "zombie glutamate" is, but you think such is logically, > or mathematically provably, not possible? > > Zombie glutamate is glutamate that functions normally in its role as a neurotransmitter according to any test, except it does not contribute to the aspect of consciousness that natural glutamate does (say red qualia). I think this is absurd: if it functions normally, then it must also contribute to consciousness normally. I don't think it is possible even by miraculous means to create zombie glutamate. > You think there is no solution to the "hard problem", or that it is > unapproachable via science. This includes your belief that we will never > be able to determine in any way any kind of diversity of phenomenal > consciousness (i.e.
including things like being able to detect simple red > green qualitative inversion, let alone the difference between more > significant types of diversity like bichromats vs trichromats, > tetrachromats). > > I think the philosophical "hard problem" of consciousness is almost by definition impossible to solve. However, I think we can go a long way towards solving the "easy problem", even to the point of being able to scan someone's brain and deduce what they are thinking. > It is impossible for you to imagine how anyone could ever experience a new > blue they have never experienced before. > > No, I can imagine that. > You prefer talking about zombie functional isomorphs, or think zombie > functional isomorphs are more consistent with your thinking, as you think > such is more possible than zombie glutamate? > > No, I don't think zombie functional isomorphs are possible. I use the term in order to dismiss it. > Do you think it would be possible to build a system, purely out of zombie > information (i.e. by definition, has no qualitative > properties/consciousness), that could pass a Turing test? > > Yes. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Mon Feb 16 18:19:05 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Mon, 16 Feb 2015 11:19:05 -0700 Subject: [ExI] zombie physics / detecting qualia paper Message-ID: Folks, Thanks all, immensely, for all the help on the Detecting Qualia paper, especially the suggestions and professional level editing. Over the weekend, we added the 4 graphical figures portraying the quale interpretation problem, suffered by Gallant, in his effort to make movies by reading our minds. We are now working on a section where we acknowledge all these contributions by you all. Could everyone who has contributed please go into this section, and add their name, as they'd like to be recognized for contributing?
Even if you made a small edit, I'd like to include you. https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit Thanks Again, Brent Allsop -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Mon Feb 16 18:36:31 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Mon, 16 Feb 2015 11:36:31 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: Bill, I bet people once asked Galileo why he was so interested in whether the sun went around the earth, or the other way around. He was probably constantly asked: Even if you knew such, what would you do with that information? It's all about knowing what the future will be like, and being able to make preparations for, and decisions, based on good information. I'd bet you are a trichromat, and represent your visual knowledge, like most of us, with 3 primary colors. Do you have any interest in knowing, qualitatively, what bichromat, and even more phenomenal, tetrachromat visual information is qualitatively like? Do you care if someone might have drastically different qualitative knowledge than you....? Are you interested in knowing, qualitatively, what it is like to be a bat? In fact, is zombie physics good enough for you, or would you also like to know the qualitative nature of the entire world? Oh, and it has to do with the future, and making salvation decisions based on future possibilities. Most people think that they are "spirits" just inhabiting their bodies, when in reality, what they think are spirits are just knowledge of such, inhabiting knowledge of their bodies. All of this is dependent on the brain. All these kinds of people know there is something more to physical bodies and consciousness than zombie physics. So, claiming that is all there is has no effect on them.
But what if you could experimentally prove to them that they don't have ghosts that can survive without a brain, just knowledge of such, made of phenomenal qualities? When you think of being uploaded, what do you think it might, or could, be like? Is it just a matter of creating a new and improved copy of yourself, and then destroying the old one? People know there is a big problem with that kind of thinking. Or is knowing what that new and improved consciousness of yours will be like critically important? Is it not important that your knowledge of your spirit be able to traverse between brains...? Knowing how to 'eff' the ineffable, and how to solve the problems of other minds, is what this stuff is all about. If people have a better understanding of what they are, they can make far better decisions, like choosing cryonic preservation, on which their eternal salvation is dependent. Brent Allsop On Sat, Feb 14, 2015 at 2:58 PM, Stathis Papaioannou wrote: > > > On Sunday, February 15, 2015, William Flynn Wallace > wrote: > >> Here is my question: even if you could identify the exact electrical and >> chemical goings-on in the brain when a person experiences some sensation, >> what exactly will you know? How will it help you know something else? We >> know, for ex., that the amygdala is involved in emotions, particularly >> anger. How does that help us understand anger and how to deal with it? I >> am not saying that it is a waste of time, I just want to know what you >> intend to do with the answers? >> >> bill w >> > > We have treatments for neurological and psychiatric conditions that are > based on an understanding of the brain. > > > >> On Sat, Feb 14, 2015 at 9:57 AM, Stathis Papaioannou >> wrote: >> >>> >>> >>> On Sunday, February 15, 2015, John Clark wrote: >>> >>>> On Sat, Feb 14, 2015 Stathis Papaioannou wrote: >>>> >>>>> >>>> > Brent believes that consciousness is not due to the system >>>>> structure but rather to the substrate.
>>>> >>>> >>>> If computers with the same logical structure but with different >>>> substrates (say vacuum tubes, germanium transistors and silicon >>>> transistors) all came up with different answers when you multiplied 27 by >>>> 54 I'd have a lot more confidence that this theory is correct, but they >>>> don't, they all come up with 1458. Actually calling this a theory is >>>> giving it too much credit as there is no experiment that can be performed >>>> to prove it wrong, or even an experiment that would allow you to learn a >>>> little more about it. >>>> >>> >>> There are experiments that can be performed - try a physically >>> different, but chemically identical substrate using alternative >>> isotopes. It would work just the same, proving that working just the same >>> is what is important, and not what the parts are made of. >>> >>> >>> -- >>> Stathis Papaioannou >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >>> >> > > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Feb 16 18:42:15 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 16 Feb 2015 13:42:15 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Mon, Feb 16, 2015 at 11:49 AM, Stathis Papaioannou wrote: > >> Since you can't prove what causes consciousness you can't prove that >> moving the mind to a different substrate, like from biology to electronics, >> won't affect or destroy the consciousness.
>> > > > Yes, you CAN prove just that, with the only assumption being that > consciousness, if it exists, > We don't need to assume that, we all know from direct experience that consciousness exists, or at least I know that mine does. > > is due to physical processes in your brain. > Sure, but WHICH physical process? If you're going to change the substrate your mind operates on from biology to electronics then, although the logical schematic would remain the same, some physical processes are going to change or be eliminated entirely. If one of those physical mechanisms produces consciousness and does nothing else (and the discovery of such a thing would prove Darwin wrong because it could never have been created by natural selection) and your new mind doesn't include it then you would be just as intelligent as you are now or even more so, but you would not be conscious. However as I've said I think the probability of that idea being correct and Darwin being wrong is so ridiculously tiny it would be silly to worry about it. > > As I've tried to explain several times, the argument is a reductio ad > absurdum. If we say that a replacement part functions perfectly according > to every test but lacks the ability to sustain consciousness, that would > allow us to make you completely blind > Huh? > > but you would behave normally > Yes, you'd be an intelligent zombie, and I worry about the idea that intelligent zombies are possible about as much as I worry about me being the only conscious entity in the universe. Not much. > > according to any test we apply to you, and either honestly believe that > you had normal vision or, if you realised you were blind [...] > I don't understand. If you're not conscious then you're out of the realization game, you behave intelligently but you don't realize anything. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Mon Feb 16 21:18:12 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 17 Feb 2015 08:18:12 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Tuesday, 17 February 2015, John Clark wrote: > > > On Mon, Feb 16, 2015 at 11:49 AM, Stathis Papaioannou > wrote: > >> > >> Since you can't prove what causes consciousness you can't prove that >>> moving the mind to a different substrate, like from biology to electronics, >>> won't effect or destroy the consciousness. >>> >> >> > Yes, you CAN prove just that, with the only assumption being that >> consciousness, if it exists, >> > > We don't need to assume that, we all know from direct experience that > consciousness exists, or at least I know that mine does. > > >> > is due to physical processes in your brain. >> > > Sure, but WHICH physical process? > At this point, the only assumption is that it is due to some physical processes in the brain as opposed to, for example, an immaterial soul. The fewer assumptions you need to make in an argument, the more robust the argument. > If you're going to change the substrate your mind operates on from biology > to electronics then, although the logical schematic would remain the same, > some physical processes are going to change or be eliminated entirely. If > one of those physical mechanisms produces consciousness and does nothing > else (and the discovery of such a thing would prove Darwin wrong because it > could never been created by natural selection) and your new mind doesn't > include it then you would be just as intelligent as you are now or even > more so, but you would not be conscious. However as I've said I think the > probability of that idea being correct and Darwin being wrong is so > ridiculously tiny it would be silly to worry about it. 
> > >> > As I've tried to explain several times, the argument is a reductio ad >> absurdum. If we say that a replacement part functions perfectly according >> to every test but lacks the ability to sustain consciousness, that would >> allow us to make you completely blind >> > > Huh? > Essentially as you said above - if it were possible to separate consciousness from behaviour it should be possible to make a visual cortex which functions normally and put it in your brain. You would then lack visual perception - which is the definition of blindness - but you would behave normally - because the replacement part reproduces all the inputs and outputs of the original in its interface with the remaining brain tissue. > but you would behave normally >> > > Yes, you'd be a intelligent zombie, and I worry about the idea that > intelligent zombies are possible about as much as I worry about me being > the only conscious entity in the universe. Not much. > But you seem to accept that it is at least logically possible. And if it is logically possible, we can imagine a visual cortex as above, functioning perfectly but lacking consciousness. > > according to any test we apply to you, and either honestly believe that >> you had normal vision or, if you realised you were blind [...] >> > > I don't understand. If you're not conscious then you're out of the > realization game, you behave intelligently but you don't realize anything. > Consider the special case where most of your brain is intact, so that you can walk, talk, reason, experience emotions and so on in the normal conscious way. The only part that is altered is the visual cortex, replaced with a part manufactured by super-advanced aliens using exotic technologies which interfaces perfectly with the rest of your brain. 
The problem is, these aliens have no idea if you are conscious and no interest in preserving your consciousness; they are scientists and engineers only concerned with the observable functionality of your visual cortex. What would happen? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Feb 16 23:07:40 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 16 Feb 2015 18:07:40 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Mon, Feb 16, 2015 Stathis Papaioannou wrote: > At this point, the only assumption is that it [consciousness] is due to > some physical processes in the brain as opposed to, for example, an > immaterial soul. But I repeat, WHICH physical process? If you can't PROVE which physical process produces consciousness and you've changed or eliminated at least one physical process (and if you haven't then you haven't changed the substrate) then you can't PROVE that consciousness has been preserved. > Essentially as you said above - if it were possible to separate > consciousness from behaviour That's a big "if" but OK. > > it should be possible to make a visual cortex which functions normally > and put it in your brain. OK. > > You would then lack visual perception - which is the definition of > blindness OK. > but you would behave normally No, you wouldn't behave normally. If I threw a ball at you you'd be unable to catch it; the artificial visual cortex might be able to track the ball, but that's only a small part of the brain. Other parts, the conscious parts, decide that it would be fun to catch the ball and then send nerve impulses to the muscles in your arm to actually do so. But you wouldn't decide to catch the ball because you are blind and didn't even know that a ball had been thrown.
> > because the replacement part reproduces all the inputs and outputs of > the original in its interface with the remaining brain tissue. That sounds like you've replaced the entire brain, not just the visual cortex, and I don't see a reductio ad absurdum in any of this. > we can imagine a visual cortex as above, functioning perfectly but > lacking consciousness. Sure, my visual cortex works pretty well but I don't think it by itself is conscious, and I think blind people with a malfunctioning visual cortex are just as conscious as I am, although I can't prove it. > Consider the special case where most of your brain is intact, so that you > can walk, talk, reason, experience emotions and so on in the normal > conscious way. The only part that is altered is the visual cortex, replaced > with a part manufactured by super-advanced aliens using exotic technologies > which interfaces perfectly with the rest of your brain. The problem is, > these aliens have no idea if you are conscious and no interest in > preserving your consciousness; they are scientists and engineers only > concerned with the observable functionality of your visual cortex. What > would happen? In that case I would certainly behave as I always did, and I very, very strongly suspect my consciousness would be unaffected too, although I can't prove it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Feb 17 00:07:09 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 17 Feb 2015 11:07:09 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On 17 February 2015 at 10:07, John Clark wrote: >> > Essentially as you said above - if it were possible to separate >> > consciousness from behaviour > > > That's a big "if" but OK.
I believe that this is false, but that is the point of the argument - assume that zombie components are possible and see where it leads. >> > it should be possible to make a visual cortex which functions normally >> > and put it in your brain. > > > OK. > >> >> > You would then lack visual perception - which is the definition of >> > blindness > > > OK. > >> > but you would behave normally > > > No, you wouldn't behave normally. If I threw a ball at you you'd be unable > to catch it, the artificial visual cortex might be able to track the ball > but that's only a small part of the brain, other parts, the conscious parts, > decide that if would be fun to catch the ball the ball and then send nerve > impulses to the muscles in your arm to actually do so. But you wouldn't > decide to catch the ball because you are blind and didn't even know that a > ball had been thrown. But you would HAVE to behave normally, by definition. The artificial visual cortex receives input from the optic tracts, processes it, and sends output to association cortex and motor cortex. That is its design specification. That is its ONLY design specification: it is made by engineers who think consciousness is bullshit. My point is that such a device would, as an unintended side-effect, necessarily preserve consciousness. If it were possible to make a brain implant that did all the mechanistic stuff perfectly but lacked consciousness then you would end up with a being that was blind but behaved normally and thought it could see normally. But that is absurd - so it isn't possible to make such an implant. (It might be possible if consciousness is due to an immaterial soul - but as I said at the start, the assumption is that it is due to processes in the brain). 
-- Stathis Papaioannou From atymes at gmail.com Tue Feb 17 00:31:10 2015 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 16 Feb 2015 16:31:10 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: Sorry for the late response - busy last week. On Mon, Feb 9, 2015 at 11:13 AM, justin corwin wrote: > If there were a dedicated small payload rocket, I think more cubesats > might get made > *waves* http://cubecab.com/ We're working on it. If you want to help a bit, there's a student team running a KickStarter for an allied effort: https://www.kickstarter.com/projects/1092366470/project-spartan-spear . Or if you happen to know investors (angels, people with enough assets that the SEC calls them "accredited investors", and/or corporate money looking to develop such a capability), we'll happily take introductions to them. > Also, currently small payloads often have to wait, which mean timely > payloads need dedicated launchers. If you could guarantee a launch within > six months, you might attract more business as well. > Yep that's a definite pain point we've noticed. We are planning to launch within six months of contract signing - possibly less, but the gating factor at that point is getting government clearance (mainly FAA, probably FCC, possibly NOAA & Department of Commerce, depending on who's launching for who and what the satellite does). In theory we might be able to pull sub-week turnarounds if all the agencies gave immediate approvals (which would probably only happen for NASA or USAF emergencies). As to the comment elsewhere on the thread about CubeSat launch prices going up: what appears to have happened is that there were certain providers who promised cheap launch, but then rarely (possibly never) came through. So their low prices were always fictional. 
(I have heard of at least one tale - I'm not sure whether it is entirely true - where a would-be launch provider just folded, absconding with the customer's money and satellite.) Others thought they could cram on another CubeSat for not much - if you have a few kg spare capacity anyway, what can it hurt - but ran into organizational problems, such as main payload customers objecting to the perceived added risk the extra payloads brought, so those launch providers had to renege or find other ways to honor the launch commitment. Now that both of these classes are getting discredited, the higher-price-but-will-actually-launch providers' prices are beginning to dominate. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Feb 17 03:16:11 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 16 Feb 2015 22:16:11 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Mon, Feb 16, 2015 Stathis Papaioannou wrote: > > you would HAVE to behave normally, by definition. The artificial visual cortex receives input from the optic tracts, processes it, and > sends output to association cortex and motor cortex. That is its > design specification. Then behavior would be the same. And I assume that, although functionally identical with the same logical schematic, this artificial visual cortex uses a different substrate such as electronics; otherwise the thought experiment wouldn't be worth much. > > That is its ONLY design specification: it is made by engineers who think > consciousness is bullshit. My point is that such a device would, as an > unintended side-effect, necessarily preserve consciousness. I think so too, I would bet my life on it but I can't prove it. I can't prove or disprove that blind people aren't conscious because it's the biological visual cortex itself that produces consciousness. 
And I can't prove or disprove that people lacking a left big toe are not conscious because it is that toe that generates consciousness. I think both logical possibilities are equally likely. > > If it were possible to make a brain implant that did all the mechanistic > stuff perfectly but lacked consciousness then you would end up with a being > that was blind The being had a working visual cortex, how could it be blind? > > but behaved normally and thought it could see normally. And the being was correct, it could see; it was probably conscious too but it could certainly see. > > But that is absurd I'm still not seeing what's absurd. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Feb 17 03:52:22 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 16 Feb 2015 22:52:22 -0500 Subject: [ExI] Driverless racecars In-Reply-To: <013001d049b4$34305cb0$9c911610$@att.net> References: <2383061459-14801@secure.ericade.net> <013001d049b4$34305cb0$9c911610$@att.net> Message-ID: On Mon, Feb 16, 2015 at 1:45 AM, spike wrote: > Soon we will have open market self-driving cars, but even those of us who > anticipate these things will not trust them. I won't: I will be a Nervous > Nellie the whole time the thing is driving. But consider once they get > good enough you have a stadium motocross with a pack of guys chasing one > robo-bike and can't catch it. That entire stadium will be transformed. I > can't help thinking there must be a way to make a buttload of money off of > this. > Maybe self-driving bicycle messengers would be the best way for people to grow accustomed to driverless vehicles on public streets; bicycles aren't very big and don't go very fast (although faster than cars stuck in traffic jams) and so would be pretty non-threatening. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Tue Feb 17 05:10:43 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 17 Feb 2015 16:10:43 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On 17 February 2015 at 14:16, John Clark wrote: > On Mon, Feb 16, 2015 Stathis Papaioannou wrote: > >> >> > you would HAVE to behave normally, by definition. The artificial >> visual cortex receives input from the optic tracts, processes it, and >> sends output to association cortex and motor cortex. That is its >> design specification. > > > Then behavior would be the same. And I assume that, although functionally > identical with the same logical schematic, this artificial visual cortex > uses a different substrate such as electronics; otherwise the thought > experiment wouldn't be worth much. Yes. >> > That is its ONLY design specification: it is made by engineers who think >> > consciousness is bullshit. My point is that such a device would, as an >> > unintended side-effect, necessarily preserve consciousness. > > > I think so too, I would bet my life on it but I can't prove it. I can't > prove or disprove that blind people aren't conscious because it's the > biological visual cortex itself that produces consciousness. And I can't > prove or disprove that people lacking a left big toe are not conscious > because it is that toe that generates consciousness. I think both logical > possibilities are equally likely. > >> >> > If it were possible to make a brain implant that did all the mechanistic >> > stuff perfectly but lacked consciousness then you would end up with a being >> > that was blind > > > The being had a working visual cortex, how could it be blind? Because the visual cortex is perfectly functional according to any test you do on it but lacks consciousness. It is made by engineers who think consciousness is bullshit. 
>> > but behaved normally and thought it could see normally. > > > And the being was correct, it could see; it was probably conscious too but > it could certainly see. > >> >> > But that is absurd > > > I'm still not seeing what's absurd. If it is possible to separate consciousness from function then it is possible to make a visual cortex that has normal function but lacks consciousness, so if you put it into your brain you would lack all visual perception but function normally and believe you could see normally. That would be absurd - I think you have agreed. Therefore, it is not possible to make a functional analogue of your visual cortex that lacks consciousness. Consciousness comes as a necessary side-effect, whether you want it there or not. -- Stathis Papaioannou From spike66 at att.net Tue Feb 17 04:58:30 2015 From: spike66 at att.net (spike) Date: Mon, 16 Feb 2015 20:58:30 -0800 Subject: [ExI] Driverless racecars In-Reply-To: References: <2383061459-14801@secure.ericade.net> <013001d049b4$34305cb0$9c911610$@att.net> Message-ID: <01ca01d04a6e$604d96e0$20e8c4a0$@att.net> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Sent: Monday, February 16, 2015 7:52 PM To: ExI chat list Subject: Re: [ExI] Driverless racecars On Mon, Feb 16, 2015 at 1:45 AM, spike wrote: > Soon we will have open market self-driving cars, but even those of us who anticipate these things will not trust them. I won't: I will be a Nervous Nellie the whole time the thing is driving. But consider once they get good enough you have a stadium motocross with a pack of guys chasing one robo-bike and can't catch it. That entire stadium will be transformed. I can't help thinking there must be a way to make a buttload of money off of this. 
Maybe self driving bicycle messengers would be the best way for people to grow accustomed to driverless vehicles on public streets, bicycles aren't very big and don't go very fast (although faster than cars stuck in traffic jams) and so would be pretty non-threatening. John K Clark That is one hell of an idea John. A bicycle is light, sturdy, cheap, a perfect testbed vehicle. It wouldn't hurt it to fall over. The forces are light, so you could do battery power with a 10 Ah lithium. The biggest risk might be theft we might suppose, but for a testbed vehicle, it sounds like just the thing. We could even attempt a robo-unicycle. Again I think of racing, so we could imagine a standardized 1 cubic inch displacement 2-stroke single cylinder IC such as one might find on a model airplane, and have a closed course time-trial race. With a bicycle and the kinds of speeds one might achieve with a 1 cid motor (probably about 40 mph) we wouldn't need to worry much about sliding and such. We might even be able to retrofit a standard bicycle with about a 19 tooth front sprocket and the 6 sprocket rear with derailleur aft. Something like that would be cheap to build out of standard bicycle parts. The visuals would be striking, changing hearts and minds. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Feb 17 10:34:39 2015 From: pharos at gmail.com (BillK) Date: Tue, 17 Feb 2015 10:34:39 +0000 Subject: [ExI] Driverless racecars In-Reply-To: <01ca01d04a6e$604d96e0$20e8c4a0$@att.net> References: <2383061459-14801@secure.ericade.net> <013001d049b4$34305cb0$9c911610$@att.net> <01ca01d04a6e$604d96e0$20e8c4a0$@att.net> Message-ID: On 17 February 2015 at 04:58, spike wrote: > That is one hell of an idea John. A bicycle is light, sturdy, cheap, a > perfect testbed vehicle. It wouldn't hurt it to fall over. The forces are > light, so you could do battery power with a 10 Ah lithium. 
The biggest risk > might be theft we might suppose, but for a testbed vehicle, it sounds like > just the thing. We could even attempt a robo-unicycle. > I think theft of devices will soon totally cease. Now that smartphones have GPS location services and get disconnected when stolen, theft has stopped. That will soon apply to everything that can get a chip installed. BillK From pharos at gmail.com Tue Feb 17 13:44:23 2015 From: pharos at gmail.com (BillK) Date: Tue, 17 Feb 2015 13:44:23 +0000 Subject: [ExI] Driverless cars for law enforcement Message-ID: A point that I don't remember anyone mentioning is that driverless cars have cameras pointing all around. This implies that when used on police patrol duties they could be fining every casual traffic law infringement. What a money raiser for towns! If this is allowed, it will force human drivers to change into using driverless cars only. The changeover could be quicker than we expect. BillK From painlord2k at libero.it Tue Feb 17 13:54:58 2015 From: painlord2k at libero.it (Mirco Romanato) Date: Tue, 17 Feb 2015 14:54:58 +0100 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: Message-ID: <54E34832.4020507@libero.it> On 17/02/2015 14:44, BillK wrote: > A point that I don't remember anyone mentioning is that driverless > cars have cameras pointing all around. This implies that when used on > police patrol duties they could be fining every casual traffic law > infringement. What a money raiser for towns! > If this is allowed, it will force human drivers to change into using > driverless cars only. The changeover could be quicker than we expect. I suspect that if all laws were upheld and enforced, there would be a lot of problems driving around, with or without driverless cars. 
Mirco From foozler83 at gmail.com Tue Feb 17 14:56:46 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 17 Feb 2015 08:56:46 -0600 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: <54E34832.4020507@libero.it> References: <54E34832.4020507@libero.it> Message-ID: I suspect that if all laws were upheld and enforced, there would be > a lot of problems driving around, with or without driverless cars. > > Mirco > > Well, finally. A way to cut down or even eliminate deaths from cars. I am a libertarian, but it is just stupid the way Americans view their cars. Long ago we could have passed laws so that a car could not go faster than, say, 75. How many are dead because we never did that? In my younger days I hated it when a car was going faster than me. So I sped up and passed them. Yes, I have gotten numerous tickets but it never stopped me. Extremely irresponsible. So make cars that get radio signals mandating a certain speed limit and so cannot go faster than that. Put alcohol testers in every car so that they won't start if the driver is impaired. Clearly we won't stop speeding and drinking by ourselves, so let's force ourselves to do it. So - no more reason to own a Corvette or Ferrari. Good. Let those people act out their fantasies off the roads. There is no justification for other people paying, often with their lives, for some others' testosterone-fueled speeding and reckless, drunken driving. Why haven't we done this before? Because men won't grow up. bill w > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at canonizer.com Tue Feb 17 15:36:10 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 17 Feb 2015 08:36:10 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: Hi John, You keep saying: "You can't prove if something else is conscious." But, does your left brain hemisphere not know, more than we know anything, not only that your right hemisphere is conscious, but what it is qualitatively like? And if that is possible, why are you assuming we can't do the same thing the corpus callosum is doing, between brains, not just between brain hemispheres? You and Stathis keep talking about separating consciousness from behavior. If we are talking about real glutamate vs zombie glutamate, you must agree that real glutamate can behave the way it does, because of its intrinsic physical glutamate properties. Whereas, even though zombie glutamate can behave the same way, it can only do so if it has interpretation hardware that interprets that which, by definition does not have glutamate properties, as if it did. So this proves it is possible to reproduce zombie glutamate (or zombie functional isomorph, if you must) behavior, without consciousness, um I mean without real glutamate intrinsic properties (hint: these are the same thing). So I don't understand why both of you seem to be so completely missing the obvious? It seems to me that both of you continue to completely ignore these simple obvious facts? Brent Allsop On Mon, Feb 16, 2015 at 10:10 PM, Stathis Papaioannou wrote: > On 17 February 2015 at 14:16, John Clark wrote: > > On Mon, Feb 16, 2015 Stathis Papaioannou wrote: > > > >> > >> > you would HAVE to behave normally, by definition. The artificial > >> visual cortex receives input from the optic tracts, processes it, and > >> sends output to association cortex and motor cortex. That is its > >> design specification. 
> > > > > > Then behavior would be the same. And I assume that, although functionally > > identical with the same logical schematic, this artificial visual cortex > > uses a different substrate such as electronics; otherwise the thought > > experiment wouldn't be worth much. > > Yes. > > >> > That is its ONLY design specification: it is made by engineers who > think > >> > consciousness is bullshit. My point is that such a device would, as an > >> > unintended side-effect, necessarily preserve consciousness. > > > > > > I think so too, I would bet my life on it but I can't prove it. I can't > > prove or disprove that blind people aren't conscious because it's the > > biological visual cortex itself that produces consciousness. And I can't > > prove or disprove that people lacking a left big toe are not conscious > > because it is that toe that generates consciousness. I think both logical > > possibilities are equally likely. > > > >> > >> > If it were possible to make a brain implant that did all the > mechanistic > >> > stuff perfectly but lacked consciousness then you would end up with a > being > >> > that was blind > > > > > > The being had a working visual cortex, how could it be blind? > > Because the visual cortex is perfectly functional according to any > test you do on it but lacks consciousness. It is made by engineers who > think consciousness is bullshit. > > >> > but behaved normally and thought it could see normally. > > > > > > And the being was correct, it could see; it was probably conscious too > but > > it could certainty see. > > > >> > >> > But that is absurd > > > > > > I'm still not seeing what's absurd. > > If it is possible to separate consciousness from function then it is > possible to make a visual cortex that has normal function but lacks > consciousness, so if you put it into your brain you would lack all > visual perception but function normally and believe you could see > normally. That would be absurd - I think you have agreed. 
Therefore, > it is not possible to make a functional analogue of your visual cortex > that lacks consciousness. Consciousness comes as a necessary > side-effect, whether you want it there or not. > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Feb 17 16:06:10 2015 From: sparge at gmail.com (Dave Sill) Date: Tue, 17 Feb 2015 11:06:10 -0500 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: <54E34832.4020507@libero.it> Message-ID: On Tue, Feb 17, 2015 at 9:56 AM, William Flynn Wallace wrote: > > I suspect that if all laws were upheld and enforced, there would be > >> a lot of problems driving around, with or without driverless cars. >> >> Mirco >> >> Well, finally. A way to cut down or even eliminate deaths from cars. > It'd be pretty easy to combine maps and GPS to automatically limit cars to the speed limit. > I am a libertarian, > Really? but it is just stupid the way Americans view their cars. > Lots of Americans and citizens of other countries are stupid about lots of things. Is a super-nanny-state the best solution? Long ago we could have passed laws so that a car could not go faster > than, say, 75. How many are dead because we never did that? > Excessive speed is only one factor contributing to auto fatalities. There's also lack of seatbelt use, mechanical failure, impairment, poor driving ability, distraction, road rage, etc. In my younger days I hated it when a car was going faster than me. So I > sped up and passed them. Yes, I have gotten numerous tickets but it never > stopped me. Extremely irresponsible. > Irresponsible, yes. But speed limits are sometimes ridiculously low and speeding isn't *always* unsafe. 
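The maps-plus-GPS governor suggested above is easy to sketch: look up the posted limit for the road segment the GPS fix places the car on, and cap the requested speed at that limit. All names and numbers below are invented for illustration; this is a toy sketch, not any real vehicle API.

```python
# Toy "map" database: road segment -> posted speed limit in km/h.
# In a real system these would come from a map provider keyed by GPS fix.
SPEED_LIMITS_KMH = {
    "main_st": 50,
    "highway_7": 100,
    "school_zone": 30,
}

def governed_speed(segment, requested_kmh, limits=SPEED_LIMITS_KMH):
    """Return the speed the car is allowed to hold on this segment."""
    limit = limits.get(segment)
    if limit is None:
        # Unknown road: no limit data, leave the requested speed alone.
        return requested_kmh
    return min(requested_kmh, limit)

print(governed_speed("school_zone", 55))  # → 30
```

The interesting policy question is the fallback branch: when the map has no data, does the governor fail open (as here) or fail closed to some conservative default?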
So make cars that get radio signals mandating a certain speed limit and so > cannot go faster than that. > > Put alcohol testers in every car so that they won't start if the driver > is impaired. > And make drivers demonstrate real competence regularly. And mandate passive restraints. And rigorous annual vehicle inspections. And cell phone jammers. And wait, we need a way to test for pot, meth, coke, opiate, ... impairment. And I'm sure I'm missing some things... Clearly we won't stop speeding and drinking by ourselves, so let's force > ourselves to do it. > Kindly return your libertarian membership card in the envelope provided. :-) So - no more reason to own a Corvette or Ferrari. Good. Let those people > act out their fantasies off the roads. > Right, because no sports car owner ever behaves responsibly. Yes, racing should be off-road, and that's currently the law everywhere. There is no justification for other people paying, often with their lives, > for some others' testosterone-fueled speeding and reckless, drunken driving. > Do you really want to go down the path of government preventing every possible accidental death? What's the justification for the death of a person who's killed sitting at home reading a book by a plane that crashes into their house? Some big shot executives had meetings to get to? They can't telecommute? Some package needed to be delivered faster? They couldn't wait? Really? Why haven't we done this before? Because men won't grow up. > No, because people value the freedom to take risks and they don't want to live in protective bubbles for the rest of their lives "for their own safety". -Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Tue Feb 17 16:31:21 2015 From: johnkclark at gmail.com (John Clark) Date: Tue, 17 Feb 2015 11:31:21 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Tue, Feb 17, 2015 Stathis Papaioannou wrote: >> The being had a working visual cortex, how could it be blind? > > > > Because the visual cortex is perfectly functional according to any test > you do on it but lacks consciousness. Your eyeball isn't conscious, or at least I don't think it is; does that mean that you are blind? > >> I'm still not seeing what's absurd. > > > > If it is possible to separate consciousness from function And I don't think that is possible but can't prove it, but I'll assume it can be done for the sake of argument. > > then it is possible to make a visual cortex that has normal function > but lacks consciousness, You don't need to assume it is possible to separate consciousness from function for that to be possible. I don't believe my visual cortex alone is conscious, nor do I think any one particular neuron in my brain is conscious. One water molecule is not wet but the Pacific Ocean is, one neuron is not conscious but 100 billion of them can be if they're wired up so that the network behaves intelligently. > > if you put it into your brain you would lack all visual perception No you wouldn't. > > but function normally Yes. > > and believe you could see normally. And you'd be correct, you could see normally. > > That would be absurd - I think you have agreed. No, I never agreed with that. > Consciousness comes as a necessary side-effect, whether you want it > there or not. I'd bet my life that's true, but I can't prove it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Tue Feb 17 17:22:25 2015 From: johnkclark at gmail.com (John Clark) Date: Tue, 17 Feb 2015 12:22:25 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Tue, Feb 17, 2015 Brent Allsop wrote: > You keep saying: "You can't prove if something else is conscious." But, > does your left brain hemisphere not know, more than we know anything, not > only that your right hemisphere is conscious, but what it is > qualitatively like? > I don't need a proof to convince myself I'm conscious because I have something much better than proof, direct experience. > > And if that is possible, why are you assuming we can't do the same thing > the corpus callosum is doing, between brains, not just between brain > hemispheres? > If you and I were as tightly linked as the hemispheres of our brains are then Mr. Brent Clark would know from direct experience that he is conscious, but neither John Clark nor Brent Allsop would be able to have that experience, and neither would know what it's like to be Brent Clark. That's the basic reason I thought Thomas Nagel's much ballyhooed essay "What is it like to be a bat" was sorta silly. > > If we are talking about real glutamate vs zombie glutamate, you must > agree that real glutamate can behave the way it does, because of its > intrinsic physical glutamate properties. > EVERYTHING behaves the way it does, because of its intrinsic physical properties. > > Whereas, even though zombie glutamate can behave the same way, > Then it's neither zombie nor real, it's just glutamate. > it can only do so if it has interpretation hardware that interprets that > which, by definition does not have glutamate properties, as if it did. > And an Oxygen atom can only be considered part of a water molecule if 2 Hydrogen atoms make the "interpretation". 
And I still don't have the slightest understanding what in the world "zombie glutamate" is supposed to be or what makes it less real than "real glutamate". John K Clark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Feb 17 17:58:32 2015 From: johnkclark at gmail.com (John Clark) Date: Tue, 17 Feb 2015 12:58:32 -0500 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: <54E34832.4020507@libero.it> Message-ID: On Tue, Feb 17, 2015 , Dave Sill wrote: > Irresponsible, yes. But speed limits are sometimes ridiculously low and > speeding isn't *always* unsafe. > Agreed, but what really gets me is that in 2015 most traffic lights just run off a timer (usually mistimed and out of phase with other traffic lights) and are as stupid as they were in 1950. I sometimes work long hours and drive home late at night, I get a red light and look left and right down the side street and as far as the eye can see there are no cars on it, you could safely land a light airplane on it, and yet the light stubbornly remains red. And in the daytime when traffic is much greater my light turns green but I can't move one inch because 300 feet further ahead another light is red and the cars are backed up all the way to me, then the distant light turns green and things just start to move and then my light turns red. This problem went on for years but then the traffic department decided to do something about it, rather than install lights that have at least a little more intelligence than the clockwork in my washing machine they put up a "Don't block intersection" sign and considered the problem solved. I feel like putting up a "I won't block the intersection if you don't mistime the lights" sign next to it. With today's technology there is no excuse for this sort of stupidity. 
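The phase coordination being asked for here is standard traffic engineering: in a "green wave", each signal's green phase is offset from the previous signal's by the travel time between them, so a car holding the design speed meets every light on green. A minimal sketch with invented distances and speeds (real controllers also juggle cycle lengths, phase splits, and two-way traffic):

```python
# Simplified "green wave" timing: offset each signal's green start by the
# travel time from the first signal, wrapped into the common cycle length.

def green_wave_offsets(distances_m, speed_kmh, cycle_s):
    """Green-start offsets (seconds into the cycle) for each signal,
    given cumulative distances in metres from the first signal."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return [(d / speed_ms) % cycle_s for d in distances_m]

# Four signals spaced 300 m apart, 50 km/h design speed, 90 s cycle.
offsets = green_wave_offsets([0, 300, 600, 900], speed_kmh=50, cycle_s=90)
print([round(o, 1) for o in offsets])  # → [0.0, 21.6, 43.2, 64.8]
```

The catch, and the reason mistimed corridors persist, is that one set of offsets only favors one direction of travel; coordinating both directions on the same street generally requires compromising the cycle length and spacing.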
Before he started Microsoft, Bill Gates's first company analyzed traffic patterns for city governments, maybe he should get back in that business and make cheap but intelligent traffic lights. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Feb 17 19:38:27 2015 From: sparge at gmail.com (Dave Sill) Date: Tue, 17 Feb 2015 14:38:27 -0500 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: <54E34832.4020507@libero.it> Message-ID: On Tue, Feb 17, 2015 at 12:58 PM, John Clark wrote: > > On Tue, Feb 17, 2015 , Dave Sill wrote: > > > Irresponsible, yes. But speed limits are sometimes ridiculously low and >> speeding isn't *always* unsafe. >> > > Agreed, but what really gets me is that in 2015 most traffic lights just > run off a timer (usually mistimed and out of phase with other traffic > lights) and are as stupid as they were in 1950. > I couldn't agree more, John. Timing lights seems like such a simple problem compared to, e.g., self-driving cars. But if we can't do that simple thing, how are we going to make real progress? Imagine self-driving cars that advertised their projected routes to a central traffic control system that could not only time lights but adjust the speed and routes of cars to optimize flow? Of course, the biggest wins would be when there are no human-driven cars on the road. Red lights would be unnecessary because cars could safely interleave through intersections. Passengers might not want to watch, though. -Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Tue Feb 17 23:38:18 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 18 Feb 2015 10:38:18 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On 18 February 2015 at 02:36, Brent Allsop wrote: > > Hi John, > > You keep saying: "You can't prove if something else is conscious." But, > does your left brain hemisphere not know, more than we know anything, not > only that your right hemisphere is it conscious, but what it is > qualitatively like. And if that is possible, why are you assuming we can't > do the same thing the corpus callosum is doing, between brains, not just > between brain hemispheres? That's an interesting and relevant point about the left and right hemispheres of the brain. It does, however, illustrate that the only way we can really know what it is like to experience something is to become a part of the system that is doing the experiencing. I could attempt to find out what it is like to be a bat by interfacing with a bat's brain (and then the bat would also find out what it is like to be me). An objection to this, however, is that I would not be finding out what it is like to be a bat, but rather a bat-human hybrid, which may be quite different. And there is no obvious way I can see to find out what it is like to be a more alien system such as a thermostat, for example. > You and Stathis keep talking about separating consciousness from behavior. > > If we are talking about real glutamate vs zombie glutamate, you must agree > that real glutamate can behave the way it does, because of it's intrinsic > physical glutamate properties. Yes. > Where as, even though zombie glutamate can > behave the same way, it can only do so if it has interpretation hardware > that interprets that which, by definition does not have glutamate > properties, as if it did. 
So this proves it is possible to reproduce zombie > glutamate (or zombie functional isomorph, if you must) behavior, without > consciousness, That's begging the question: my contention is that if the glutamate substitute can mimic all the properties of glutamate relevant to brain function, then it will necessarily also contribute to consciousness in whatever way natural glutamate does. The function of glutamate in the brain is to bind to glutamate receptors and change their conformation, thus triggering a series of events in the postsynaptic neuron. Glutamate has other properties, for example if you mix it with potassium nitrate and light it you can make fireworks, but those properties are not relevant to brain functioning and you can ignore them if making a glutamate substitute. > um I mean without real glutamate intrinsic properties (hint: > these are the same thing). So I don't understand why both of you seem to be > so completely missing the obvious? It seems to me that both of you continue > to completely ignore these simple obvious facts? Well, to me it also seems that you are missing the obvious (and John also, and he actually agrees with me!). The obvious is this: if you try to make zombie glutamate you will fail, because if the substitute glutamate has the relevant functional properties (i.e. it binds to glutamate receptors and changes their conformation) then it will necessarily also replicate any role natural glutamate plays in consciousness. For if it were possible to make substitute glutamate that performed the same as natural glutamate but did not replicate natural glutamate's role in consciousness, then you could create a being lacking an aspect of consciousness (likely a very big aspect, since glutamate is so widespread in the brain) but behaving normally and believing that they feel normal. I have repeated the last sentence many times in many different ways but it doesn't seem to get through. 
Maybe it is because you think that the functional isomorph of glutamate would NOT necessarily result in normal behaviour? But then it wouldn't be a functional isomorph! Maybe you think the functional isomorph would result in normal behaviour but consciousness would still be altered? But then there would be a decoupling between consciousness and behaviour: the subject could be blind, or in terrible pain, and his mouth would of its own accord smile and make noises indicating that everything was fine! -- Stathis Papaioannou From possiblepaths2050 at gmail.com Tue Feb 17 23:59:04 2015 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 17 Feb 2015 16:59:04 -0700 Subject: [ExI] Transhumanist valentines In-Reply-To: <009901d048cb$65b9c9e0$312d5da0$@att.net> References: <2318667712-22999@secure.ericade.net> <009901d048cb$65b9c9e0$312d5da0$@att.net> Message-ID: You two are such romantics at heart... Anders, thanks for the awesome links! And Spike, you are a silver tongued devil! You're a man who really knows how to sweet talk a lady with a background in engineering and science! lol John : ) On Sat, Feb 14, 2015 at 7:59 PM, spike wrote: > > > > > *From:* extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] *On > Behalf Of *Anders Sandberg > *?* > > > > But can we do better? > > > > "You are my singularity" > > "I love you with all my brain" > > "You are the best enhancer science knows" > > "My love for you is growing faster than exponential" > > > > Any others? Anders Sandberg, Future of Humanity Institute Philosophy > Faculty of Oxford University > > > > > > My feelings for you transcend mere love. You stimulate the production of > endorphins within me and cause surges of dopamine to the pleasure centers > of my brain. > > > > My bride always falls for that one. In her case I mean it from the bottom > of my brainstem: she really does all that. 
> > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Feb 18 02:44:15 2015 From: johnkclark at gmail.com (John Clark) Date: Tue, 17 Feb 2015 21:44:15 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Tue, Feb 17, 2015 Stathis Papaioannou wrote: > my contention is that if the glutamate substitute can mimic all the > properties of glutamate relevant to brain function, then it will > necessarily also contribute to consciousness in whatever way natural > glutamate does. I'm just playing devil's advocate here but it could be argued that there is one brain function that you have absolutely no way of knowing if the substitute glutamate has successfully mimicked or not, the consciousness function. It could be argued that the substitute glutamate works great on everything you can test in the lab, everything you can count or measure, but the substitute doesn't work for consciousness, which therefore must have been created by something other than Evolution. Please note that I think the above idea is very silly, but I can't prove it wrong and never will be able to. And that is the very definition of a silly theory. Perhaps I'm just taking the word "proof" a little too seriously, I'm thinking of a mathematician nitpicking over every line, but if you mean evidence so overwhelming that life is too short to worry about it being wrong then I agree with you. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Wed Feb 18 10:31:01 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 18 Feb 2015 21:31:01 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On 18 February 2015 at 13:44, John Clark wrote: > On Tue, Feb 17, 2015 Stathis Papaioannou wrote: > >> > my contention is that if the glutamate substitute can mimic all the >> properties of glutamate relevant to brain function, then it will necessarily >> also contribute to consciousness in whatever way natural glutamate does. > > > I'm just playing devil's advocate here but it could be argued that there is > one brain function that you have absolutely no way of knowing if the > substitute glutamate has successfully mimicked or not, the consciousness > function. It could be argued that the substitute glutamate works great on > everything you can test in the lab, everything you can count or measure, but > the substitute doesn't work for conscious, which therefor must have been > created by something other than Evolution. > > Please note that I think the above idea is very silly, but I can't prove it > wrong and never will be able to. And that is the very definition of a silly > theory. Perhaps I'm just taking the word "proof" a little too seriously, I'm > think of a mathematician nitpicking over every line, but if you mean > evidence so overwhelming that it is correct life is too short to worry about > it being wrong then I agree with you. If that idea is true it leads to worse than just silliness; it invalidates the idea of consciousness. Functions in the brain are, to an extent, localised. The visual cortex is where perception occurs, Broca's and Wernicke's areas are where speech understanding and production are controlled. We know this because if a part of the brain is damaged it results in specific deficits in function, while other functions are left unaffected. 
So if the visual cortex is taken out the subject can't see, although he can speak normally, and he says, "I can't see". This is because there are neural connections between the speech and visual centres in the brain. The eyes register an image, signals travel to the visual cortex, neural processing occurs there, and output is sent from the visual cortex to other parts of the brain, including the speech centres, where more processing occurs. The speech centres then send output to motor neurones controlling the vocal cords and speech is produced. Now, what happens if you replace the visual cortex with a perfect functional analogue which, however, lacks the special "function" of consciousness? The scientists and engineers designing this device will replicate EVERY FUNCTION THAT CAN POSSIBLY BE SCIENTIFICALLY TESTED FOR. They may leave out the functions that can't be tested for, of which consciousness may be one, but they will do a good job with all the others. The artificial device will have the same connections to the surrounding brain tissue as the original visual cortex did, and it will accept input, process it, and send output to other parts of the brain just as the original visual cortex did. If there is some other way the different parts of the brain communicate with each other that we don't know about, the super competent designers will discover it and find a way to reproduce it. Now I hope you can see that if EVERY FUNCTION THAT CAN POSSIBLY BE SCIENTIFICALLY TESTED FOR is incorporated into the artificial visual cortex then it will receive and process input and send output to the rest of the brain in the same way as the original visual cortex; for if not, that would be a difference in a FUNCTION THAT CAN POSSIBLY BE SCIENTIFICALLY TESTED FOR, rather than a difference in one of the functions that can't be tested for, like consciousness. 
And if the output coming from the artificial visual cortex is completely normal, the subject will behave completely normally; signals will go from the retina via multiple neural and artificial relays to the vocal cords and the subject will declare that he can see perfectly normally. However, what would happen if there is a function that can't be scientifically tested for, responsible for visual perception (i.e. consciousness) in the cortex? That function would be left out and the subject would be blind; but because the artificial visual cortex is sending all the right signals to his speech centres, and every other part of his brain, he doesn't realise he is blind and he still declares that he can see normally. So do you see the problem with this? If consciousness is not a necessary side-effect of observable processes, it would be possible to remove a major aspect of a person's consciousness, such as visual perception, but they would behave normally and they would not notice that anything had changed. Which would mean that you could have gone blind an hour ago but haven't realised it. In which case, what is the difference between being blind and not being blind? And, extending the argument by gradually enlarging the brain replacement, what is the difference between being conscious and not being conscious? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Wed Feb 18 13:07:09 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Wed, 18 Feb 2015 06:07:09 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: <54E48E7D.4040308@canonizer.com> Hi Stathis, On 2/17/2015 4:38 PM, Stathis Papaioannou wrote: > On 18 February 2015 at 02:36, Brent Allsop wrote: >> Hi John, >> >> You keep saying: "You can't prove if something else is conscious." 
But, >> does your left brain hemisphere not know, more than we know anything, not >> only that your right hemisphere is conscious, but what it is >> qualitatively like. And if that is possible, why are you assuming we can't >> do the same thing the corpus callosum is doing, between brains, not just >> between brain hemispheres? > That's an interesting and relevant point about the left and right > hemispheres of the brain. It does, however, illustrate that the only > way we can really know what it is like to experience something is to > become a part of the system that is doing the experiencing. I could > attempt to find out what it is like to be a bat by interfacing with a > bat's brain (and then the bat would also find out what it is like to > be me). An objection to this, however, is that I would not be finding > out what it is like to be a bat, but rather a bat-human hybrid, which > may be quite different. And there is no obvious way I can see to find > out what it is like to be a more alien system such as a thermostat, > for example. You must have missed the section in the paper pointing out the difference between composite vs elemental qualia. You are talking about composite qualia here, and I completely agree with you. But effing an elemental redness quale is very different. And, once you can bridge the explanatory gap with an elemental quale, more complex types of composite effing are just more complex "easy" variations on the theme. > Well, to me it also seems that you are missing the obvious (and John > also, and he actually agrees with me!). The obvious is this: if you > try to make zombie glutamate you will fail, because if the substitute > glutamate has the relevant functional properties (i.e. it binds to > glutamate receptors and changes their conformation) then it will > necessarily also replicate any role natural glutamate plays in > consciousness. 
For if it were possible to make substitute glutamate > that performed the same as natural glutamate but did not replicate > natural glutamate's role in consciousness, then you could create a > being lacking an aspect of consciousness (likely a very big aspect, > since glutamate is so widespread in the brain) but behaving normally > and believing that they feel normal. I have repeated the last sentence > many times in many different ways but it doesn't seem to get through. > Maybe it is because you think that the functional isomorph of > glutamate would NOT necessarily result in normal behaviour? But then > it wouldn't be a functional isomorph! Maybe you think the functional > isomorph would result in normal behaviour but consciousness would > still be altered? But then there would be a decoupling between > consciousness and behaviour: the subject could be blind, or in > terrible pain, and his mouth would of its own accord smile and make > noises indicating that everything was fine! OK, thanks for pointing out that for functionalists (the target audience of this paper) I've not been adequately addressing this issue, either here, or in the paper. (working on fixing that) So the corollary to "if there is no detectable neural correlate to redness, someone will not be accurately experiencing redness." is "You will not be able to accurately believe you are experiencing redness, without the neural correlate." So, yes, the neural substitution will completely fail. And if it succeeds, I will admit that materialist theories have been falsified. The prediction is that nobody will be able to find a way to present to the binding system anything that is not the neural correlate of redness, and get a true redness experience. There are myriads of various possibilities, such as, in the inverted quale case, someone will believe they know what redness is like, when in reality they are just mistakenly thinking that the greenness quale is redness. 
Or there is the lying-through-their-teeth example: they know that 1 isn't really redness, nor is the 0 really greenness; they are just knowingly lying when they say: "That is red, and I know what it is qualitatively like." And, the prediction is that observation systems like the one being used by Gallant will be able to objectively detect and prove exactly when all of this type of stuff is going on. So, no: without real glutamate, or without whatever the detectable functional isomorph of redness turns out to be, whatever is causing them to accurately think they are experiencing redness will not be possible. Otherwise the theory, which predicts you can't experience redness without the reliably detectable intrinsic functional isomorph of redness, will be falsified, or will need to be adjusted. There must be something that is a correct redness experience. When we are aware of redness, we are detecting this redness, and we are able to distinguish it from greenness. This must be true, regardless of whether the relationship is functional, material, quantum, or whatever. And whatever the brain is doing to detect that redness is qualitatively different than greenness must be objectively discoverable, reproducible, mappable, and ultimately effable. Brent From brent.allsop at canonizer.com Wed Feb 18 18:16:36 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Wed, 18 Feb 2015 11:16:36 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: Hi Stathis, It feels to me that the entire paper has come together, in a very compelling way, except for the section at the end where I attempt to address the neural substitution argument. I have a real hard time getting my head around the way a functionalist thinks. 
I know a bunch of the stuff I end up saying is completely useless, and I know I can say the right thing, but I just don't know how to put it in a way that will be understood, as well as possible, by a functionalist. The current section on the neural substitution argument at the end is just a loose collection of ideas I'm trying to put together. I wonder if you could provide some feedback on whether any of that is good, or a complete waste, and so on. And can I get you, or anyone, to state the issue you have, in general, with the idea of the paper? https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit Thanks, Brent Allsop On Wed, Feb 18, 2015 at 3:31 AM, Stathis Papaioannou wrote: > > If that idea is true it leads to worse than just silliness; it invalidates > the idea of consciousness. > > -- > Stathis Papaioannou > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Feb 18 18:29:25 2015 From: johnkclark at gmail.com (John Clark) Date: Wed, 18 Feb 2015 13:29:25 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Wed, Feb 18, 2015 Stathis Papaioannou wrote: > Functions in the brain are, to an extent, localised. > Memory doesn't seem to be localized, and there is no way to know, or at least no way to prove, if consciousness is. > > if a part of the brain is damaged it results in specific deficits in > function, while other functions are left unaffected. > If region X of the brain is damaged we know that some behaviors change and others do not; we can make educated guesses, but we have no way of PROVING whether consciousness is destroyed or not. > > So if the visual cortex is taken out the subject can't see, although he > can speak normally > And the exact same thing would happen if the eyeballs of the subject were taken out. What does that teach you about the nature of consciousness? Nothing as far as I can tell. > Now, what happens if you replace the visual cortex with a perfect > functional analogue which, however, lacks the special "function" of > consciousness? 
> If it's the biological visual cortex that generates your consciousness and it is removed and replaced by an electronic visual cortex that does everything just as well as the biological version EXCEPT for generating consciousness, then an intelligent conscious being has been turned into an intelligent zombie. > > Now I hope you can see that if EVERY FUNCTION THAT CAN POSSIBLY BE > SCIENTIFICALLY TESTED FOR is incorporated into the artificial visual cortex > then it will receive and process input and send output to the rest of the > brain in the same way as the original visual cortex; > Yes. > > the subject will behave completely normally; > Yes. > > what would happen if there is a function that can't be scientifically > tested for, responsible for visual perception (i.e. consciousness) in the > cortex? > They're not the same thing: visual perception can be tested for; consciousness of what is perceived cannot be. > > That function would be left out and the subject would be blind > If a being responds to light in its environment then it may or may not be conscious, but it is certainly not blind. > > but because the artificial visual cortex is sending all the right > signals to his speech centres, and every other part of his brain, he > doesn't realise he is blind > If the biological visual cortex is what generates consciousness and it has been removed then he doesn't realize ANYTHING, he's a zombie. He could still be intelligent, witty, charming and sexy, but he would have no more consciousness than a brick. > > and he still declares that he can see normally. > Yes, because his behavior is unaffected; he said "I can see normally" before the operation so he'd say the same thing after it. > it would be possible to remove a major aspect of a person's > consciousness, such as visual perception, but they would behave normally > and they would not notice that anything had changed. 
Their bodies would behave just as they always did, but they wouldn't notice anything; they're zombies. > > So do you see the problem with this? > Yes, the theory that intelligent behavior and consciousness can be separated can't be proven wrong and will never be proven wrong, so the idea is silly, very very silly. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Wed Feb 18 17:10:17 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Wed, 18 Feb 2015 18:10:17 +0100 Subject: [ExI] Zombie glutamate In-Reply-To: <54E48E7D.4040308@canonizer.com> References: <54E48E7D.4040308@canonizer.com> Message-ID: Since people do talk about consciousness this much ... I have a theory that they indeed are conscious. Just as I am. Had they not been conscious, they wouldn't bother debating it so passionately. On Wed, Feb 18, 2015 at 2:07 PM, Brent Allsop wrote: > > Hi Stathis, > > On 2/17/2015 4:38 PM, Stathis Papaioannou wrote: > >> On 18 February 2015 at 02:36, Brent Allsop >> wrote: >> >>> Hi John, >>> >>> You keep saying: "You can't prove if something else is conscious." >>> But, >>> does your left brain hemisphere not know, more than we know anything, >>> not >>> only that your right hemisphere is conscious, but what it is >>> qualitatively like. And if that is possible, why are you assuming we >>> can't >>> do the same thing the corpus callosum is doing, between brains, not just >>> between brain hemispheres? >>> >>> That's an interesting and relevant point about the left and right >> hemispheres of the brain. It does, however, illustrate that the only >> way we can really know what it is like to experience something is to >> become a part of the system that is doing the experiencing. I could >> attempt to find out what it is like to be a bat by interfacing with a >> bat's brain (and then the bat would also find out what it is like to >> be me). 
An objection to this, however, is that I would not be finding >> out what it is like to be a bat, but rather a bat-human hybrid, which >> may be quite different. And there is no obvious way I can see to find >> out what it is like to be a more alien system such as a thermostat, >> for example. >> > > You must have missed the section in the paper pointing out the difference > between composite vs elemental qualia. You are talking about composite qualia > here, and I completely agree with you. But effing an elemental redness > quale is very different. And, once you can bridge the explanatory gap with > an elemental quale, more complex types of composite effing are just more > complex "easy" variations on the theme. > > Well, to me it also seems that you are missing the obvious (and John >> also, and he actually agrees with me!). The obvious is this: if you try to >> make zombie glutamate you will fail, because if the substitute glutamate >> has the relevant functional properties (i.e. it binds to glutamate >> receptors and changes their conformation) then it will necessarily also >> replicate any role natural glutamate plays in consciousness. For if it were >> possible to make substitute glutamate that performed the same as natural >> glutamate but did not replicate natural glutamate's role in consciousness, >> then you could create a being lacking an aspect of consciousness (likely a >> very big aspect, since glutamate is so widespread in the brain) but >> behaving normally and believing that they feel normal. I have repeated the >> last sentence many times in many different ways but it doesn't seem to get >> through. Maybe it is because you think that the functional isomorph of >> glutamate would NOT necessarily result in normal behaviour? But then it >> wouldn't be a functional isomorph! Maybe you think the functional isomorph >> would result in normal behaviour but consciousness would still be altered? 
>> But then there would be a decoupling between consciousness and behaviour: >> the subject could be blind, or in terrible pain, and his mouth would of its >> own accord smile and make noises indicating that everything was fine! >> > > OK, thanks for pointing out that for functionalists (the target audience > of this paper) I've not been adequately addressing this issue, either here, > or in the paper. (working on fixing that) > > So the corollary to > > "if there is no detectable neural correlate to redness, > someone will not be accurately experiencing redness." > > is > > "You will not be able to accurately believe you are experiencing > redness, without the neural correlate." > > So, yes, the neural substitution will completely fail. And if it > succeeds, I will admit that materialist theories have been falsified. The > prediction is that nobody will be able to find a way to present to the > binding system anything that is not the neural correlate of redness, and > get a true redness experience. There are myriads of various possibilities, > such as, in the inverted quale case, someone will believe they know what > redness is like, when in reality they are just mistakenly thinking that the > greenness quale is redness. Or there is the lying-through-their-teeth > example: they know that 1 isn't really redness, nor is the 0 really > greenness; they are just knowingly lying when they say: "That is red, and > I know what it is qualitatively like." And, the prediction is that > observation systems like the one being used by Gallant will be able to > objectively detect and prove exactly when all of this type of stuff is > going on. So, no: without real glutamate, or without whatever the detectable > functional isomorph of redness turns out to be, whatever is causing them to accurately > think they are experiencing redness will not be possible. 
> Otherwise the theory, which predicts you can't experience redness without > the reliably detectable intrinsic functional isomorph of redness, > will be falsified, or will need to be adjusted. > > There must be something that is a correct redness experience. When we > are aware of redness, we are detecting this redness, and we are able to > distinguish this from greenness. This must be true, regardless of whether > the relationship is functional, material, quantum, or whatever. And > whatever the brain is doing to do this detection that redness is > qualitatively different than greenness, must be objectively discoverable, > reproducible, mappable, and ultimately effable. > > Brent > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Wed Feb 18 19:21:38 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Wed, 18 Feb 2015 12:21:38 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54E48E7D.4040308@canonizer.com> Message-ID: Hi Tomaz, Are you interested at all in the qualitative nature of other people? What about uploading, do you envision that as something you will experience, sometime in the future? Do you have any interest in what that might include, and what it might be like? Brent On Wed, Feb 18, 2015 at 10:10 AM, Tomaz Kristan wrote: > Since people do talk about consciousness this much ... I have a > theory that they indeed are conscious. Just as I am. > > Had they not been conscious, they wouldn't bother debating it so > passionately. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Wed Feb 18 23:35:10 2015 From: johnkclark at gmail.com (John Clark) Date: Wed, 18 Feb 2015 18:35:10 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54E48E7D.4040308@canonizer.com> Message-ID: On Wed, Feb 18, 2015 Brent Allsop wrote: > What about uploading, do you envision that as something you will > experience, sometime in the future? > If I am very lucky. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Thu Feb 19 00:07:15 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 18 Feb 2015 19:07:15 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Tue, Feb 17, 2015 at 12:22 PM, John Clark wrote: > > I don't need a proof to convince myself I'm conscious because I have > something much better than proof, direct experience. > ### This way goes solipsism. If you are not a solipsist, you believe that there is a world out there, with trees, animals and other objects that have an existence independent of you. Their existence and properties are inferred from sensory data, memories and inbuilt cognitive hardware that over your life build a more or less coherent model, which also includes a model of yourself. 
If you are conscious, and some of the objects say they are conscious, and you know their brains are similar to what your model of yourself is, then this information is sufficient to conclude that they are conscious, just as knowing that the Moon is a piece of rock is sufficient to know that it has a hidden side, even if you never directly observe it. To say that the consciousness of some of these objects in your world model is inherently unknowable is solipsistic - you demand special proof where the usual coherent model of the world should be sufficient. Rafał -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Wed Feb 18 23:31:47 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 18 Feb 2015 18:31:47 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> <54E01317.4080105@canonizer.com> Message-ID: On Sun, Feb 15, 2015 at 4:24 AM, Stathis Papaioannou wrote: > > > The idea is not to copy behaviour as such but to model the components. An > engineer can take a component from a machine, put it through a series of > tests, and make a replacement component from perhaps completely different > parts that, if done properly, should work just like the original when > installed - even if the exact way the machine works is unknown. ### Well, this is getting a bit tricky here - if you don't know what the machine is doing, how do you know your tests of its components capture the relevant features you need to put into your replacement part? If you think that the brain is there to produce heat, you may end up making a 100-watt space heater and missing the point. 
The modeling procedure has to capture the informational content being manipulated by the machine, and then you have to be able to separate out the incidental physical aspects of the substrate that performs the computations, so as to make physically different yet computationally sufficiently similar components. We are assuming that an uploading procedure will have to do just that, and I do think that it is feasible, and any upload that reasonably faithfully reprises the computational structure of the original will experience qualia. Rafał -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Thu Feb 19 03:20:27 2015 From: johnkclark at gmail.com (John Clark) Date: Wed, 18 Feb 2015 22:20:27 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Wed, Feb 18, 2015 at 7:07 PM, Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: >> I don't need a proof to convince myself I'm conscious because I have >> something much better than proof, direct experience. >> > > > This way goes solipsism. > I don't think so, but even if it did, the fact remains that I know from direct experience that I am conscious; I need no theories or proofs to know that. I have a theory that you're conscious too, but only when you behave intelligently, not when you're sleeping or under anesthesia or dead. > > If you are not a solipsist > I am not a solipsist for the same reason you're not, because I believe in the theory that if something behaves intelligently then it is conscious. And I'm not alone, every single person on this list believes in and uses this theory every minute of every day of their lives; ...well OK there is one exception, there is one time when some don't believe in the theory, when they're arguing philosophy on the internet. 
> you believe that there is a world out there, with trees, animals and > other objects that have an existence independent of you. Their existence > and properties are inferred from sensory data > Yes, but I don't need to do any inferring to figure out I'm conscious. > To say that consciousness of some of these objects in your world model is > inherently unknowable is solipsistic - you demand special proof where the > usual coherent model of the world should be sufficient. > What are you talking about? I specifically said I DON'T demand proof, the evidence for the theory that I'm not the only conscious being in the universe is so overwhelming that it's just not worth worrying about it being wrong. Nevertheless the fact remains I have direct experience with just one consciousness. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Thu Feb 19 00:43:35 2015 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 18 Feb 2015 19:43:35 -0500 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: <54E34832.4020507@libero.it> Message-ID: On Tue, Feb 17, 2015 at 9:56 AM, William Flynn Wallace wrote: > > In my younger days I hated it when a car was going faster than me. So I > sped up and passed them. Yes, I have gotten numerous tickets but it never > stopped me. Extremely irresponsible. > ### So you say you are a poor driver. And then you want to restrict other drivers, and take months of their lives from them at gunpoint, because restricting speeds amounts to a significant QALY loss; in my case, since I drive a lot, obeying speed laws would force me to spend an additional 6 months of my life in my car. Libertarian? Rafał -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Thu Feb 19 14:00:36 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 20 Feb 2015 01:00:36 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Thursday, February 19, 2015, Brent Allsop > wrote: > > Hi Stathis, > > It feels to me that the entire paper has come together, in a very > compelling way, except for the section at the end where I attempt to > address the neural substitution argument. I have a real hard time getting > my head around the way a functionalist thinks. I know a bunch of the > stuff I end up saying is completely useless, and I know I can say the right > thing, but I just don't know how to put it in a way that will be > understood, as best as possible, to a functionalist. > Brent, I reread the paper and I have to say, whatever you've done to it, the latest version of it is much clearer. Good luck with the presentation at the conference. > The current section on the neural substitution argument at the end is just > a loose collection of ideas I'm trying to put together. I wonder if you > could provide some feedback on whether any of that is good, or a complete waste, > and so on. > > And can I get you, or anyone, to state the issue you have, in general, > with the idea of the paper. > > > https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit > > Thanks, > > Brent Allsop > I feel that the ideas expressed in the paper about effing the ineffable are not directly related to the question of whether functionalism is correct. I'm not sure why you want to include the section on neural substitution in the same paper. I also feel that you have missed the point of the neural substitution argument (or Chalmers' statement of it). 
I would suggest that you try thinking about the replacement in the first instance by forgetting about consciousness, zombies or information and considering only the mechanics of the system. To an alien scientist, a brain may be just a system of interacting organic parts. If he replaces glutamate with a functional analogue, all he is concerned about is that it have the same effect on glutamate receptors. If it does, then the subject will say "I see red, same as before". He MUST say this, because he said it with natural glutamate, and the glutamate substitute is performing the same role. Forgetting completely about qualia, zombies and so on for the moment and thinking only about chemistry, do you see that this must be so? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Thu Feb 19 14:29:53 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Thu, 19 Feb 2015 15:29:53 +0100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: Brent! I would like to upload myself, of course. Into a pleasant (virtual) environment. What if I die first? I was dead before. Before I was born. Still, I am here, uploaded into some biology. On Thu, Feb 19, 2015 at 3:00 PM, Stathis Papaioannou wrote: > > > On Thursday, February 19, 2015, Brent Allsop > wrote: > >> >> Hi Stathis, >> >> It feels to me that the entire paper has come together, in a very >> compelling way, except for the section at the end where I attempt to >> address the neural substitution argument. I have a real hard time getting >> my head around the way a functionalist thinks. I know a bunch of the >> stuff I end up saying is completely useless, and I know I can say the right >> thing, but I just don't know how to put it in a way that will be >> understood, as best as possible, to a functionalist. 
>> > > Brent, I reread the paper and I have to say, whatever you've done to > it, the latest version of it is much clearer. Good luck with the > presentation at the conference. > > >> The current section on the neural substitution argument at the end is >> just a loose collection of ideas I'm trying to put together. I wonder if >> you could provide some feedback on whether any of that is good, or a complete >> waste, and so on. >> >> And can I get you, or anyone, to state the issue you have, in general, >> with the idea of the paper. >> >> >> https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit >> >> Thanks, >> >> Brent Allsop >> > > I feel that the ideas expressed in the paper about effing the ineffable > are not directly related to the question of whether functionalism is > correct. I'm not sure why you want to include the section on neural > substitution in the same paper. > > I also feel that you have missed the point of the neural substitution > argument (or Chalmers' statement of it). I would suggest that you try > thinking about the replacement in the first instance by forgetting about > consciousness, zombies or information and considering only the mechanics of > the system. To an alien scientist, a brain may be just a system of > interacting organic parts. If he replaces glutamate with a functional > analogue, all he is concerned about is that it have the same effect on > glutamate receptors. If it does, then the subject will say "I see red, same > as before". He MUST say this, because he said it with natural glutamate, > and the glutamate substitute is performing the same role. Forgetting > completely about qualia, zombies and so on for the moment and thinking only > about chemistry, do you see that this must be so? 
> > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Thu Feb 19 15:12:24 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 20 Feb 2015 02:12:24 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Thursday, February 19, 2015, John Clark wrote: > On Wed, Feb 18, 2015 Stathis Papaioannou > wrote: > > > Functions in the brain are, to an extent, localised. >> > > Memory doesn't seem to be localized, and there is no way to know, or at > least no way to prove, if consciousness is. > It is usually believed that the various types of experiences are localised to cortical regions. However, if it is diffusely spread over the brain, the argument just needs to be modified slightly so that the replacement is of part of the putative consciousness-generating mechanism. > > if a part of the brain is damaged it results in specific deficits in >> function, while other functions are left unaffected. >> > > If region X of the brain is damaged we know that some behaviors change and > others do not, we can make educated guesses but we have no way of PROVING > if consciousness is destroyed or not. > > >> > So if the visual cortex is taken out the subject can't see, although he >> can speak normally >> > > And the exact same thing would happen if the eyeballs of the subject were > taken out. What does that teach you about the nature of consciousness? > Nothing as far as I can tell. > If the eyeballs are removed the subject can still remember and describe visual experiences while if the entire visual cortex is removed they can't. 
However, that isn't essential to the argument: the only assumption is that consciousness is due to something in the brain, and then we consider what happens if we partly replace that something with a non-conscious but otherwise normally functioning analogue. > > Now, what happens if you replace the visual cortex with a perfect >> functional analogue which, however, lacks the special "function" of >> consciousness? >> > > If it's the biological visual cortex that generates your consciousness and > it is removed and replaced by an electronic visual cortex that does > everything just as well as the biological version EXCEPT for generating > consciousness then an intelligent conscious being has been turned into an > intelligent zombie. > A partial zombie only, since only vision is affected. Consciousness consists of multiple modalities. Blind people are not necessarily zombies. > > Now I hope you can see that if EVERY FUNCTION THAT CAN POSSIBLY BE >> SCIENTIFICALLY TESTED FOR is incorporated into the artificial visual cortex >> then it will receive and process input and send output to the rest of the >> brain in the same way as the original visual cortex; >> > > Yes. > > >> > the subject will behave completely normally; >> > > Yes. > > >> > what would happen if there is a function that can't be scientifically >> tested for, responsible for visual perception (i.e. consciousness) in the >> cortex? >> > > They're not the same thing, visual perception can be tested for, > consciousness of what is perceived can not be. > I use the word "perception" as synonymous with qualia or experience. If a camera is not conscious then I would not say that a camera perceives light. > > That function would be left out and the subject would be blind >> > > If a being responds to light in its environment then it may or may not be > conscious, but it is certainly not blind. > Again, I use the word "blind" to mean lacking in visual qualia. 
A subject with a damaged visual cortex responds to light to an extent, since the pupils constrict when a light is shone in the eyes, but he does not have any visual perception when this happens. > > but because the artificial visual cortex is sending all the right >> signals to his speech centres, and every other part of his brain, he >> doesn't realise he is blind >> > > If the biological visual cortex is what generates consciousness and it has > been removed then he doesn't realize ANYTHING, he's a zombie. He could > still be intelligent, witty, charming and sexy but he would have no more > consciousness than a brick. > You seem to be assuming that there is a single all-modalities consciousness mechanism in the brain. What would happen if you took out just a part of this mechanism? > > and he still declares that he can see normally. >> > > Yes, because his behavior is unaffected, he said "I can see normally" > before the operation so he'd say the same thing after it. > > > it would be possible to remove a major aspect of a person's >> consciousness, such as visual perception, but they would behave normally >> and they would not notice that anything had changed. > > > Their bodies would behave just as they always did but they wouldn't notice > anything, they're zombies. > > >> > So do you see the problem with this? >> > > Yes, the theory that intelligent behavior and consciousness can be > separated can't be proven wrong and will never be proven wrong, so the idea > is silly, very very silly. > > John K Clark > > > > > >> > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Thu Feb 19 21:32:58 2015 From: johnkclark at gmail.com (John Clark) Date: Thu, 19 Feb 2015 16:32:58 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Thu, Feb 19, 2015 at 10:12 AM, Stathis Papaioannou wrote: > If the eyeballs are removed the subject can still remember and describe > visual experiences while if the entire visual cortex is removed they can't. I don't believe that's true, one of the symptoms of cortical blindness is VISUAL hallucinations. And there are other interesting symptoms, sometimes a person is blind as a bat but insists he can see just fine, and sometimes it's just the opposite and blindsight happens, he insists that he's blind but can nimbly maneuver through an obstacle course without error, when the patient is informed about his success he insists that he must have just gotten lucky because he's blind. For a reductio ad absurdum it's not enough for the conclusion to be absurd, it must also be false. > > the only assumption is that consciousness is due to something in the > brain > OK, but you have no way of proving what that "something" is, and this isn't just because of technological limitations, you have no way of proving it even in theory. A proof isn't worth much if one of the steps in it requires you to perform an impossible task. > > and then we consider what happens if we partly replace that something > with a non-conscious but otherwise normally functioning analogue. > And since you don't know what that "something" is you have no way of knowing if that replacement part has it or not. So maybe it's conscious and maybe it's not. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Thu Feb 19 22:02:52 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 20 Feb 2015 09:02:52 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On 20 February 2015 at 08:32, John Clark wrote: >> > the only assumption is that consciousness is due to something in the >> > brain > > > OK, but you have no way of proving what that "something" is, and this isn't > just because of technological limitations, you have no way of proving it > even in theory. A proof isn't worth much if one of the steps in it requires > you to perform an impossible task. You don't need to know what it is. If it is something in the brain, then you systematically replace every part of the brain and eventually you will get to it. >> > and then we consider what happens if we partly replace that something >> > with a non-conscious but otherwise normally functioning analogue. > > > And since you don't know what that "something" is you have no way of knowing > if that replacement part has it or not. So maybe it's conscious and maybe > it's not. But you will get to it eventually, even in your ignorance. What could possibly happen if the something is partly replaced? -- Stathis Papaioannou From johnkclark at gmail.com Fri Feb 20 17:58:03 2015 From: johnkclark at gmail.com (John Clark) Date: Fri, 20 Feb 2015 12:58:03 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Thu, Feb 19, 2015 at 5:02 PM, Stathis Papaioannou wrote: > >> OK, but you have no way of proving what that "something" is, and this >> isn't >> >> just because of technological limitations, you have no way of proving >> it >> >> even in theory. A proof isn't worth much if one of the steps in it >> requires >> >> you to perform an impossible task. 
> > > >You don't need to know what it is. If it is something in the brain, > then you systematically replace every part of the brain and eventually > you will get to it. > You will get to consciousness eventually for certain if every replacement part gives the same output for every given input that you can measure in your lab AND if every theoretical logically possible input and output that no lab could ever measure is also identical; so the only way you can be sure you've covered all your bases, even the ridiculously unlikely ones, is if the replacement is not just functionally identical but identical in every way. But if all the replacement parts are identical in every way then you haven't really done anything, you haven't really replaced anything, you had a biological brain before and you have an identical biological brain after you've done all your "replacing". You've proven nothing except that X=X. John K Clark > > >> > and then we consider what happens if we partly replace that something > >> > with a non-conscious but otherwise normally functioning analogue. > > > > > > And since you don't know what that "something" is you have no way of > knowing > > if that replacement part has it or not. So maybe it's conscious and maybe > > it's not. > > But you will get to it eventually, even in your ignorance. What could > possibly happen if the something is partly replaced? > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
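[Editor's note] The substitution argument being batted back and forth can be put in toy form. A minimal sketch, purely illustrative and with invented names: the "rest of the brain" only ever sees a component's input/output mapping, so a replacement that copies the mapping, however differently it is implemented inside, is behaviourally indistinguishable to everything downstream:

```python
# Toy neural-substitution sketch. All names are invented for illustration;
# nothing here claims to model real neuroscience.

def biological_cortex(signal):
    # original component: some fixed input -> output mapping
    return (signal * 3 + 1) % 256

def artificial_cortex(signal):
    # replacement built by black-box testing: same mapping, different
    # internal implementation (a lookup table instead of arithmetic)
    table = {s: (s * 3 + 1) % 256 for s in range(256)}
    return table[signal]

def rest_of_brain(cortex, stimulus):
    # downstream processing depends only on the cortex's outputs
    return "I see red" if cortex(stimulus) % 2 else "I see green"

# over every possible input, the reports are identical
for s in range(256):
    assert rest_of_brain(biological_cortex, s) == rest_of_brain(artificial_cortex, s)
print("behaviourally indistinguishable over all tested inputs")
```

This is, of course, exactly the point in dispute: whether behavioural indistinguishability to the rest of the system settles anything about experience.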
URL: From avant at sollegro.com Thu Feb 19 06:18:40 2015 From: avant at sollegro.com (Stuart LaForge) Date: Thu, 19 Feb 2015 06:18:40 +0000 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: <20150219061840.Horde.xZ4zPD_UeBR6uytgVDnkPQ1@secure88.inmotionhosting.com> Quoting John Clark: > > Message: 2 > Date: Mon, 16 Feb 2015 11:09:55 -0500 > From: John Clark > To: ExI chat list > Subject: Re: [ExI] Zombie glutamate > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On Sat, Feb 14, 2015 at 4:44 PM, wrote: > >> While Brent is not completely wrong because substrates do have very >> specific structures that enable their function, the structural >> considerations outweigh the simple identity of the substrate. For example >> a hemoglobin molecule denatured by heat would still chemically be >> hemoglobin, but it will have lost its delicate folded structure and >> thereby all of its biological function. >> > > Denatured hemoglobin chemically reacts very differently than non-denatured > hemoglobin does, and the logical structure of a brain fed by denatured > hemoglobin would be quite different from your brain, the neurons would > respond to signals differently because they were dead, killed by lack of > oxygen. But if done competently the logical schematic of your uploaded > brain in an electronic computer would be identical to the logical schematic > of your biological brain. I might be in the minority on the list, but I think you underestimate the Kolmogorov complexity and information content of the brain by orders of magnitude. All schematics are, by necessity, simplifications of the real deal. Have you ever driven the schematics of a car? Yeah sure, you probably played a racing video game, but do you think the developers programmed your virtual car to have virtual cylinders burning virtual fuel? Now you want to talk about simulating your brain. 
How do you know whether something as crucial as your imagination, for an arbitrary example, is like the spark plugs of the virtual race car, which are *not* simulated? Would you as an upload even be aware that that component of your mind was missing? > > >>> If you want to simulate the mind, you would have to >> simulate the human brain from the atoms up along with any attendant >> chemistry and physics. You might even have to simulate the rest of the >> body as well, after all, I wouldn't feel quite like myself without my >> adrenal glands or my testicles subtly influencing my thinking. > > I see nothing sacred in hormones, I don't see the slightest reason why they > or any neurotransmitter would be especially difficult to simulate through > computation, because chemical messengers are not a sign of sophisticated > design on nature's part, rather it's an example of Evolution's bungling. If > you need to inhibit a nearby neuron there are better ways of sending that > signal than launching a GABA molecule like a message in a bottle thrown > into the sea and waiting ages for it to diffuse to its random target. I never claimed hormones are sacred but they do have utility. Your example shows you don't understand how chemical messengers like hormones function. The target is not random at all but actually the 40% or so of neurons that are expressing the GABA receptor at any one time. And each one of those neurons can modulate the expression levels of the GABA receptor based on the reaction of those cells to the concentrations of various chemical messengers including GABA itself. If GABA is a message in a bottle, it is a bottle that only the intended recipients can open. But really the bottle itself *is* the message, and it only means anything to a select subset of neurons that are receptive. Those neurons intentionally express the genes to be receptive. 
> I'm not interested in chemicals, only the information they contain, I want > the information to get transmitted from cell to cell by the best method and > few would send smoke signals if they had a fiber optic cable. > The information content in each molecular message must be tiny, just a > few bits because only about 60 neurotransmitters such as > acetylcholine, norepinephrine and GABA are known, even if the true number > is 100 times greater (or a million times for that matter) the information > content of > each signal must be tiny. Also, for the long range stuff, exactly which > neuron receives the signal can not be specified because it relies on a > random process, diffusion. The fact that it's slow as molasses in February > does not add to its charm. I think you and Tomaz both severely underestimate the complexity of living systems. Adding together the Bekenstein limits, calculated using atomic masses and covalent radii of the individual atoms in GABA, yields an information storage capacity of approximately 35 MB and that's just for one molecule of GABA. Of course Bekenstein bounds are upper limits, so the relevant information content is likely much lower. But the relevance of information is context-dependent. So even spam can be relevant to those who generate it. So just how much of the maximal information capacity of our brain (approximately 10^41 bytes) nature uses to generate John Clark is a tough open question. Oooh I just had an insight on how I might be able to estimate it, but I don't have time now. 
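[Editor's note] Stuart's ~35 MB figure can be roughly reproduced. A sketch, assuming (as his description suggests) that the Bekenstein bound I <= 2*pi*R*E/(hbar*c*ln 2) is applied per atom with E = mc^2 and standard covalent radii; the radii chosen shift the result by tens of percent, so treat the output as order-of-magnitude only:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def bekenstein_bits(mass_kg, radius_m):
    # Bekenstein bound on information: I <= 2*pi*R*E / (hbar*c*ln 2), E = m*c^2
    return 2 * math.pi * radius_m * mass_kg * C / (HBAR * math.log(2))

# GABA is C4H9NO2; per-atom masses (amu) and single-bond covalent radii (metres)
atoms = {"C": (12.011, 76e-12), "H": (1.008, 31e-12),
         "N": (14.007, 71e-12), "O": (15.999, 66e-12)}
formula = {"C": 4, "H": 9, "N": 1, "O": 2}

total_bits = sum(count * bekenstein_bits(mass * AMU, radius)
                 for symbol, count in formula.items()
                 for mass, radius in [atoms[symbol]])
print(f"summed per-atom Bekenstein bound for GABA: ~{total_bits / 8 / 1e6:.0f} MB")
```

With these inputs the sum lands near the quoted 35 MB, which supports the arithmetic, if not the interpretation that any of those bits are usable by a brain.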
Artificial > neurons could be made to release neurotransmitters as inefficiently as > natural ones if anybody really wanted to, but it would be pointless when > there are much faster ways. Again, you don't understand how cell signaling functions. These signals are not indiscriminate at all but instead highly specific and targeted at the cells that have the appropriate receptors. Furthermore, the messenger *is* the package and its function is encoded by its shape. It is analogous to the copies of a key that only open certain locks. These keys can only initiate a response from cells that have the correct lock. Yes, simulations certainly could eventually get much faster but the first few are liable to be much slower. The difficulty in simulating signaling molecules and other biochemicals is that the all-important shape of the molecules is determined by the distribution of electron densities, and thereby electric charge, over the molecules. The distribution of these electrons is a quantum mechanical phenomenon and as the Bekenstein bound example I gave above illustrates, quantum mechanics is an information processing rabbit hole that has the potential to go quite deep. > The great strength biology has over present day electronics is in the > ability of one neuron to make thousands of connections of various strengths > with other neurons. However, I see absolutely nothing in the fundamental > laws of physics that prevents nano machines from doing the same thing, or > better and MUCH faster. But biomolecules *are* nano machines that collaborate to replicate and they have had billions of years to optimize themselves. We are just now getting started. If general AI was easy, everybody would be doing it. And the Fermi paradox could imply that nobody is doing it because it is very hard. 
Stuart LaForge From stathisp at gmail.com Fri Feb 20 23:47:22 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 21 Feb 2015 10:47:22 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Saturday, February 21, 2015, John Clark wrote: > > On Thu, Feb 19, 2015 at 5:02 PM, Stathis Papaioannou > wrote: > >> > >> OK, but you have no way of proving what that "something" is, and this >>> isn't >>> >> just because of technological limitations, you have no way of proving >>> it >>> >> even in theory. A proof isn't worth much if one of the steps in it >>> requires >>> >> you to perform an impossible task. >> >> >> >You don't need to know what it is. If it is something in the brain, >> then you systematically replace every part of the brain and eventually >> you will get to it. >> > > You will get to consciousness eventually for certain if every replacement > part gives the same output for every given input that you can measure in > your lab AND if every theoretical logically possible input and output that > no lab could ever measure is also identical; so the only way you can be > sure you've covered all your bases, even the ridiculously unlikely ones, is > if the replacement is not just functionally identical but identical in > every way. But if all the replacement parts are identical in every way then > you haven't really done anything, you haven't really replaced anything, you > had a biological brain before and you have an identical biological brain > after you've done all your "replacing". You've proven nothing except that > X=X. > > John K Clark > If the replacement part perfectly copies the observable I/O behaviour of the original part, then consciousness will necessarily also be copied. 
That it would be technically difficult to do, or even physically impossible, does not weaken the argument: consciousness is a necessary side-effect of behaviour, not an optional extra without causal efficacy of its own that can be tacked on or left out. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Feb 21 04:52:08 2015 From: johnkclark at gmail.com (John Clark) Date: Fri, 20 Feb 2015 23:52:08 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Fri, Feb 20, 2015 at 6:47 PM, Stathis Papaioannou wrote: > If the replacement part perfectly copies the observable I/O behaviour of > the original part, then consciousness will necessarily also be copied. > [...] consciousness is a necessary side-effect of behaviour > I think that's true, I'd bet my life that's true, but I'll never be able to prove it and neither can you. John K Clark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Feb 21 06:04:03 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 21 Feb 2015 01:04:03 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: Stuart LaForge wrote: > > If you want to simulate the mind, you would have to simulate the human > brain from the atoms up Why? Sounds like an astronomical waste of time. > >> I see nothing sacred in hormones, I don't see the slightest reason why >> they or any neurotransmitter would be especially difficult to simulate >> through computation, because chemical messengers are not a sign of >> sophisticated design on nature's part, rather it's an example of >> Evolution's bungling. 
If you need to inhibit a nearby neuron there are >> better ways of sending that signal than launching a GABA molecule like a >> message in a bottle thrown into the sea and waiting ages for it to diffuse >> to its random target. > > > Your example shows you don't understand how chemical messengers like > hormones function. The target is not random at all Hormones get to their target by a process of diffusion, if you know of a process more random than that I'd like to hear about it. > If GABA is a message in a bottle It is. > it is a bottle that only the intended recipients can open. And the speed of the message is SLOW and contains less than one byte of information. I'm sorry but hormones just do not impress me with their telecommunication sophistication. > Adding together the Bekenstein limits, calculated using atomic masses > and covalent radii of the individual atoms in GABA, What the hell? Why on Earth would anybody want to do that?? We're talking about chemistry here not Black Holes! > > yields an information storage capacity of approximately 35 MB and that's > just for one molecule of GABA. So 2 GABA molecules provide 70 MB of information that my brain can use? The insulation on the wires of the power supply in my computer contains rubber molecules and they have just as much information as a GABA molecule, can my computer make use of that? > So just how much of the maximal information capacity of our brain > (approximately 10^41 bytes) That estimate is just a tad too high, I'd say about ten billion billion billion times too high, there are only 10^11 neurons and 10^14 synapses in the entire brain. Actually it's probably even worse than that, there is massive redundancy in the brain and good evidence that one synapse contains less than one bit of information.
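The numbers contested in this exchange are easy to sanity-check with a back-of-envelope script. A sketch in Python; the molecular radius, diffusion constant, and per-synapse capacity below are assumed round numbers for illustration, not values taken from either poster or from any cited source:

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits,
# with E = m*c^2 for a molecule of mass m confined within radius R.
hbar = 1.0545718e-34     # J*s
c = 2.99792458e8         # m/s
dalton = 1.66053907e-27  # kg

def bekenstein_bits(mass_kg, radius_m):
    """Upper bound on information in a sphere of radius R holding energy m*c^2."""
    return 2 * math.pi * radius_m * mass_kg * c**2 / (hbar * c * math.log(2))

# GABA is C4H9NO2, ~103 Da; assume a ~0.3 nm effective radius.
gaba_bits = bekenstein_bits(103 * dalton, 0.3e-9)
print(f"Bekenstein bound, one GABA molecule: ~{gaba_bits / 8 / 1e6:.0f} MB")
# Lands within an order of magnitude of the 35 MB quoted (the bound
# scales linearly with the assumed radius) -- and, as the reply argues,
# says nothing about how much information chemistry actually uses.

# The counter-estimate: ~10^14 synapses at roughly 1 bit each.
synapse_bits = 1e14
print(f"Synapse-count estimate: ~{synapse_bits / 8:.0e} bytes")
# ~10^13 bytes versus the 10^41 bytes quoted: the claimed capacity is
# about 10^28 times larger, i.e. "ten billion billion billion".

# Diffusion time over a distance L scales as t ~ L^2 / (6*D);
# D ~ 5e-10 m^2/s is a typical small-molecule diffusion constant.
D = 5e-10
for L, label in [(20e-9, "synaptic cleft, ~20 nm"), (1e-3, "1 mm of tissue")]:
    print(f"diffusion across {label}: ~{L**2 / (6 * D):.1e} s")
# Sub-microsecond across a cleft, minutes across a millimetre: the
# "message in a bottle" is fast or slow depending entirely on distance.
```

None of this settles whose framing is right; it only puts the contested quantities side by side under the stated assumptions.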
> >> If your job is delivering packages and all the packages are very small >> and your boss doesn't care who you give them to as long as it's on the >> correct continent and you have until the next ice age to get the work done, >> then you don't have a very difficult profession. I see no reason why >> simulating that anachronism would present the slightest difficulty. >> Artificial neurons could be made to release neurotransmitters as >> inefficiently as natural ones if anybody really wanted to, but it would be >> pointless when there are much faster ways > > > Again, you don't understand how cell signaling functions. These signals > are not indiscriminate at all but instead highly specific and targeted By "highly specific and targeted" you mean there is only one place the brain can make use of the hormone, so it's rather sad that the poor hormone must bump into billions of trillions of places that have no use for it until, by the laws of diffusion and blind chance, it eventually stumbles into the place it's looking for. > The distribution of these electrons is a quantum mechanical phenomenon > and as the Bekenstein bound example To hell with the Bekenstein bound, it's utterly irrelevant for biology! But if you insist on using it to show how marvelous your brain is then I can use it to show how marvelous my computer is, and some computers have a much larger surface area and thus much more Bekenstein information than the brain. If you want to play that silly game you will lose. > But biomolecules *are* nano machines True, very very primitive nano machines. > And the Fermi paradox could imply that nobody is doing it because it is > very hard. So random mutation and natural selection can figure it out but intelligence can't? Doesn't seem likely to me. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brent.allsop at canonizer.com Sat Feb 21 14:33:03 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 21 Feb 2015 07:33:03 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: <54E8971F.7090505@canonizer.com> Hi John, That inability to test for or prove/falsify your claim is what makes it useless philosophies of men. The way you progress from mere philosophy to true theoretical physical science is to provide a way for the experimentalist to prove/falsify your claims to everyone, forcing everyone into the same scientific consensus camp. You obviously don't yet understand the theoretical physical science the "Detecting Qualia" paper is describing. https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit Shawn Colvin sings the popular song "Never Saw Blue Like That" (see https://www.youtube.com/watch?v=sptmDFuzpTA). Currently you are a new blue zombie. You can sing about it, but you don't yet know what it is qualitatively like. There is something that is responsible for what she is singing about. The paper shows how to overcome the quale interpretation problem so you can experimentally discover and detect what that is. The next step is to then reproduce that in your mind, as described in the paper, so you will no longer be a new blue zombie. When that is done, how will that not prove what you claim can't be proven? Brent Allsop On 2/20/2015 9:52 PM, John Clark wrote: > > > On Fri, Feb 20, 2015 at 6:47 PM, Stathis Papaioannou > > wrote: > > > > If the replacement part perfectly copies the observable I/O behaviour of the > original part, then consciousness will necessarily also be copied. > [...] consciousness is a necessary side-effect of behaviour > > > I think that's true, I'd bet my life that's true, but I'll never be > able to prove it and neither can you.
> > John K Clark > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From Carsten.Zander at t-online.de Sat Feb 21 14:51:07 2015 From: Carsten.Zander at t-online.de (Carsten Zander) Date: Sat, 21 Feb 2015 15:51:07 +0100 Subject: [ExI] The Robot Big Bang Message-ID: <54E89B5B.9020405@t-online.de> Hi, we should give the thing a name! Memorable terms are important to illustrate important things to the people. A suggestion: We should establish the term "ROBOT BIG BANG" The Robot Big Bang will happen years BEFORE the Singularity occurs. This could happen (x = robots):
2015 xx
2016 xxx
2017 xxxx
2018 xxxxx
2019 xxxxxx
2020 xxxxxxx
2021 xxxxxxxxxx
2022 xxxxxxxxxxxx
2023 xxxxxxxxxxxxxxx
2024 xxxxxxxxxxxxxxxxxxxxx
2025 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Robot Big Bang
..
..
2035 Singularity (years later)
The Robot Big Bang will have the following characteristics:
- Robots will become very cheap and will spread rapidly around the world.
- All simple activities can be performed by robots.
- Most people will lose their jobs to robots.
- All people will need a basic income.
- Robots themselves will be produced by robots.
- Robots will transmit their skills, knowledge and abilities to other robots.
- All people will be able to produce most things on their own with the help of robots and 3-D printers (3-D printers are like robots).
etc.
https://www.change.org/petitions/united-nations-g-20-the-robots-are-coming-please-ensure-all-people-a-basic-income https://www.facebook.com/RobotsAndBasicIncome Carsten From stathisp at gmail.com Sat Feb 21 15:22:30 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 22 Feb 2015 02:22:30 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Saturday, February 21, 2015, John Clark > wrote: > > > On Fri, Feb 20, 2015 at 6:47 PM, Stathis Papaioannou > wrote: > > > > If the replacement part perfectly copies the observable I/O behaviour of >> the original part, then consciousness will necessarily also be copied. >> [...] consciousness is a necessary side-effect of behaviour >> > > I think that's true, I'd bet my life that's true, but I'll never be able > to prove it and neither can you. > You more or less agreed in your previous post with what you just quoted from me above - except you said that perfect copying of the I/O behaviour could only be done by replacing like for like, which would be no replacement (there are some who claim that even replacing like for like would not reproduce consciousness, but let's not get into that). But in a thought experiment it is enough to consider the case where a volume of neural tissue is replaced by a black box which has the appropriate I/O behaviour. Do you accept that such a black box would necessarily preserve consciousness, or can you conceive of a way of partly replacing the putative consciousness-generating apparatus without the subject or an observer noticing, respectively, a gross change in consciousness or behaviour? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnkclark at gmail.com Sat Feb 21 15:43:56 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 21 Feb 2015 10:43:56 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: <54E8971F.7090505@canonizer.com> References: <54E8971F.7090505@canonizer.com> Message-ID: Hi Brent > That inability to test for or prove/falsify your claim is what makes it > useless philosophies of men. > Many philosophies have little utility other than entertainment. And we don't need to prove something with mathematical precision to conclude that the probability of it being true is so high that it's not worth the wear and tear on valuable brain cells worrying about it being wrong, particularly when there is zero chance of ever obtaining a rigorous proof. In the real world we can and often do act even if we're only 51% certain that it's the right thing to do, and my certainty is much greater than 51% that consciousness is fundamental and a byproduct of intelligence; and being fundamental, once you've said that consciousness is the way data feels when it is being processed, there is simply nothing more to be said about the matter. The fact that despite everybody theorizing about it on the internet nobody, including you, has discovered anything new and fundamental about consciousness reinforces my view that there is nothing new and fundamental to be found. > > You obviously don't yet understand the theoretical physical science the > "Detecting Qualia" paper is describing. > With all due respect Brent, if I don't understand what's new and enlightening in your paper it's because either you don't understand your own paper or because you don't understand what's been known for centuries. I mean, is the idea that the sensation of redness and light with a 620 nm wavelength are not the same thing really supposed to be a revolutionary discovery? > > The paper shows how to overcome the quale interpretation problem so you > can experimentally discover and detect what that is.
> That would be the greatest scientific and philosophical discovery in the history of the world, a discovery I would not have thought possible; but you must be talking about some other paper because it sure as hell wasn't in the paper I read. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sat Feb 21 16:14:30 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 21 Feb 2015 11:14:30 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sat, Feb 21, 2015 at 10:22 AM, Stathis Papaioannou wrote: > there are some who claim that even replacing like for like would not > reproduce consciousness In this case there is experimental proof that that idea is false, the atoms in our brains are constantly being replaced and yet consciously we feel like the same person, or at least I do. > in a thought experiment it is enough to consider the case where a volume > of neural tissue is replaced by a black box which has the appropriate I/O > behaviour. Do you accept that such a black box would necessarily preserve > consciousness, > Certainly I agree with that, I'd bet my life it's true, but I cannot provide a proof that a mathematician would accept. > > or can you conceive of a way of partly replacing the > putative consciousness-generating apparatus without the subject or an > observer noticing, respectively, a gross change in consciousness or > behaviour? > Behavior would certainly be the same and I think it's only a matter of time before that is demonstrated experimentally, and I am virtually certain consciousness would be unaffected too however that will never be demonstrated experimentally.
The difference is that in the replacement part all the inputs and outputs *that you can measure in the lab* are identical but although I think it's ridiculous it is logically possible that there are other inputs and outputs that neither your lab nor any lab will ever be able to measure. But even the lifetime of the universe is far too short to worry about this very small residual uncertainty. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Sat Feb 21 18:02:47 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 21 Feb 2015 11:02:47 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54E8971F.7090505@canonizer.com> Message-ID: <54E8C847.5070705@canonizer.com> On 2/21/2015 8:43 AM, John Clark wrote: > is the idea that the sensation of redness and light with a 620 nm > wavelength are not the same thing really supposed to be a revolutionary > discovery? That's just the first step. Gallant is already reading our minds, and could have already discovered what is responsible for a redness quality; he is just currently mapping what he is detecting back to the intrinsic properties of the initial cause of the perception process - the strawberry, resulting in the quale interpretation problem. This method is what is making him blind to the qualitative nature of anything he may be detecting (as illustrated in the 4 figures in the paper). In addition to realizing that 620 nm light is different than redness, you have to correspondingly map the zombie information describing what you have detected back to the intrinsic quality of the knowledge, not back to the strawberry.
So in addition to realizing they are different, you simply have to interpret things in the correct way, so you can properly interpret the correct qualitative nature of what you are detecting (as portrayed in the final Fig 4): https://docs.google.com/document/d/1Vxfbgfm8XIqkmC5Vus7wBb982JMOA8XMrTZQ4smkiyI/edit > > The paper shows how to overcome the quale interpretation problem > so you can experimentally discover and detect what [is responsible > for new blueness]. > > > That would be the greatest scientific and philosophical discovery in > the history of the world, I agree. > a discovery I would not have thought possible; but you must be talking > about some other paper because it sure as hell wasn't in the paper I > read. > On 2/19/2015 7:00 AM, Stathis Papaioannou wrote: > Brent, I reread the paper and I have to say, whatever you've done to > it, the latest version of it is much clearer. Good luck with the > presentation at the conference. > At least Stathis never gave up, and I've finally been able to communicate with him. Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Sat Feb 21 18:08:36 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 21 Feb 2015 11:08:36 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54E8971F.7090505@canonizer.com> Message-ID: <54E8C9A4.6080105@canonizer.com> On 2/21/2015 8:43 AM, John Clark wrote: > but you must be talking about some other paper because it sure as hell > wasn't in the paper I read. I remember responding in exactly the same way after reading a paper by Steven Lehar for the first time. Then he explained it to me, basically saying it's in there, you just missed it. So I went back and re-read it, and my life, and the world, have never been the same since.
Brent Allsop From stathisp at gmail.com Sat Feb 21 21:41:13 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 22 Feb 2015 08:41:13 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sunday, February 22, 2015, John Clark wrote: Behavior would certainly be the same and I think it's only a matter of time > before that is demonstrated experimentally, and I am virtually certain > consciousness would be unaffected too however that will never be > demonstrated experimentally. The difference is that in the replacement part > all the inputs and outputs *that you can measure in the lab* are identical > but although I think it's ridiculous it is logically possible that there > are other inputs and outputs that neither your lab nor any lab will ever be > able to measure. > If all the measurable inputs and outputs are replicated, how is it possible that your brain can notice that there has been a change in consciousness as a result of the replacement? If there is any effect from what you say are the logically possible non-measurable inputs and outputs then they are in fact measurable, which is contradictory. > But even the lifetime of the universe is far too short to worry about this > very small residual uncertainty. > > John K Clark > > > > > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From connor_flexman at brown.edu Sat Feb 21 18:40:05 2015 From: connor_flexman at brown.edu (Flexman, Connor) Date: Sat, 21 Feb 2015 13:40:05 -0500 Subject: [ExI] The Robot Big Bang In-Reply-To: <54E89B5B.9020405@t-online.de> References: <54E89B5B.9020405@t-online.de> Message-ID: On Sat, Feb 21, 2015 at 9:51 AM, Carsten Zander wrote: > > we should give the thing a name! > Memorable terms are important to illustrate important things to the people. 
Robotics Explosion Robotics Diaspora Robotics Surge Robotic Prevalence I personally don't like Big Bang being used for other things, because it sounds a little pandering and isn't at all the correct metaphor. I also prefer Robotics to Robot because: 1. Robotics emphasizes they don't have to be automatons with two legs and arms (like the popular imagination) 2. "Robot ____" has been sullied by the "Robot Uprising" people and everyone I talk to about AGI risk has a visceral reaction against anything that sounds like this Either of these preferences is flexible though, and I'm not trying to shoot anything down. I also think some more famous person will probably coin the actual term that gets spread around, but perhaps we could influence it. Lastly, among ourselves: I think while it may be helpful to have a term that we use for general robotic prevalence coming about quickly, we should remember that it isn't something that happens at a specified time. The Singularity has the potential to happen in a timespan of months or a few years if AI has a hard takeoff, but robotics is almost guaranteed to undergo typical growth where no year suddenly sees an unimaginable explosion. It is a bit of a tradeoff between useful heuristics and truth, whether we even choose to name this at all, but I do lean somewhat toward truth and not building this growth up like it will somehow be a spectacular Event like people view the Singularity. For this reason I prefer Robotic Prevalence instead of Robotics Explosion (too much like Intelligence Explosion) or Robotics Big Bang. Connor -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anders at aleph.se Sat Feb 21 23:54:30 2015 From: anders at aleph.se (Anders Sandberg) Date: Sun, 22 Feb 2015 00:54:30 +0100 Subject: [ExI] The Robot Big Bang In-Reply-To: <54E89B5B.9020405@t-online.de> Message-ID: <2927890462-17738@secure.ericade.net> Carsten Zander, 21/2/2015 4:12 PM: We should establish the term "ROBOT BIG BANG" ... The Robot Big Bang will have the following characteristics: - Robots will become very cheap and will spread rapidly around the world. - All simple activities can be performed by robots. - Most people will lose their jobs to robots. - All people will need a basic income. - Robots themselves will be produced by robots. - Robots will transmit their skills, knowledge and abilities to other robots. - All people will be able to produce most things on their own with the help of robots and 3-D printers (3-D printers are like robots). Hmm... why do you think all these things have to come to be together? Some are likely causal: cheap robots that can do most activities would likely lead to a lot of lost jobs - but note that "simple activities" does not cover most jobs actually done. Robot production does not imply micromanufacturing, and so on. Still, the population of robots does increase rather rapidly. http://earlywarn.blogspot.co.uk/2012/04/global-robot-population.html However, the data it builds on is pretty chunky: http://www.worldrobotics.org/uploads/tx_zeifr/Charts_IFR_PR__04_June_2014.pdf At least the industrial robotics market happens in fits and starts, making exponential extrapolations risky. Figure 3 of http://www.unece.org/press/pr2001/01stat10e.html also shows that quality adjusted robot prices have been declining since the 90s while labour costs have been increasing - but since the task abilities are so different, it is pretty hard to pinpoint a time when the curves will actually cross for real in generic tasks.
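That price/labour crossover is just two exponentials, and modelling it that way also shows why pinpointing the date is hard. A toy sketch; every number below is an invented illustration, not a figure from the IFR or UNECE links:

```python
import math

# Toy crossover model: quality-adjusted robot cost falling at rate d
# per year, labour cost rising at rate g per year. All four starting
# values are assumptions for illustration only.
robot_cost0 = 25.0   # assumed effective $/task-hour for a robot today
labour_cost0 = 15.0  # assumed $/hour for comparable human labour
d, g = 0.08, 0.02    # assumed annual decline / growth rates

# Solve robot_cost0*(1-d)^t = labour_cost0*(1+g)^t for t:
t = math.log(robot_cost0 / labour_cost0) / (math.log(1 + g) - math.log(1 - d))
print(f"crossover in ~{t:.1f} years under these assumptions")
```

Nudging any of the assumed rates by a couple of percentage points moves the crossover by years, which is exactly the difficulty with pinpointing when the curves cross for real.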
So it seems reasonable to expect a near future point with more robots than humans (but we already have more microprocessors than humans). A more important one would be when the cost (weighted by the job skill pool) of robots becomes smaller than labour costs - but it will be a rather messy thing to measure since the job skill pool also changes as a response. So I expect the robot big bang to make sense from a sufficient distance just like the industrial revolution, but close up there is little to see. Understanding these complications and that there likely is a big automation shift matters. As does explaining it properly to decisionmakers. I am a bit worried that right now it turns into a simplistic "The robots are coming, so we need basic income", which means some politicians will immediately accept or dismiss it depending on their views of basic income, and hence deduce that robots are either a problem or not a problem... Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Feb 22 01:26:27 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 21 Feb 2015 20:26:27 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sat, Feb 21, 2015 Stathis Papaioannou wrote: > If all the measurable inputs and outputs are replicated, how is it > possible that your brain can notice that there has been a change in > consciousness as a result of the replacement? > It's possible, although not probable, because consciousness is not measurable. > If there is any effect from what you say are the logically possible > non-measurable inputs and outputs then they are in fact measurable, > which is contradictory. No, that doesn't follow.
Consciousness is an effect and I've never met anybody who claimed it doesn't exist, and yet it is not measurable. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Feb 22 03:05:30 2015 From: johnkclark at gmail.com (John Clark) Date: Sat, 21 Feb 2015 22:05:30 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: <20150219061840.Horde.xZ4zPD_UeBR6uytgVDnkPQ1@secure88.inmotionhosting.com> References: <20150219061840.Horde.xZ4zPD_UeBR6uytgVDnkPQ1@secure88.inmotionhosting.com> Message-ID: On Thu, Feb 19, 2015 Stuart LaForge wrote: > The difficulty in simulating signaling molecules and other biochemicals > is that the all important shape The shape is all important in biology but if its function is translated into another medium such as electronics then the shape of the original molecule is utterly irrelevant and the only important thing is the bit of information it carries. > > of the molecules is determined by the distribution of electron > densities, and thereby electric charge, over the molecules. The > distribution of these electrons is a quantum mechanical phenomenon and as > the Bekenstein bound Oh no are we really going back to the Bekenstein bound, something that virtually no biologist thinks is of the slightest importance? Very well, if you want to play that silly game, my iMac has a larger surface area than your brain, therefore according to Bekenstein it contains more information than your brain. QED Yes it's a silly game but you're the one who wanted to play. > > All schematics are, by necessity, simplifications of the real deal. Yes, a good simplification gets rid of pointless wheels within wheels and gets to the essentials. > > Have you ever driven the schematics of a car?
No, the schematics alone won't get me to work because a car is a noun, and a brick is a noun too, so I can't build a house I can live in with the simulation of a brick, but when you're talking about information things are very different. My calculator does real arithmetic not simulated arithmetic and my iPod plays real music not simulated music. So the question you have to ask yourself is are you more like a symphony or more like a brick? > >Yeah sure, you probably played a racing video game, but do you think the > developers programmed your virtual car to have virtual cylinders burning > virtual fuel? A simulated flame is certainly not identical to a real flame but to say it has absolutely no reality can lead to problems. Suppose you say that for a fire to be real it must have some immaterial essence of fire, a sort of "burning" soul, thus a simulated flame does not really burn because it just changes the pattern in a computer memory. The trouble is, using the same reasoning you could say that a real fire doesn't really burn, it just oxidizes chemicals; but really a flame can't even do that, it just obeys the laws of chemistry. If we continue with this we soon reach a point where nothing is real but the fundamental laws of physics, and I don't think either of us wants to embrace that position. I think a simulated flame is real at one level but care must be taken not to confuse levels. A simulated flame won't burn your computer but it will burn a simulated object. A real flame won't burn the laws of chemistry but it will burn your finger. > Would you as an upload even be aware that that component of your mind was > missing? > If it's important to me I'd notice, if it's not I don't care if it's missing. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Sun Feb 22 06:17:16 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 22 Feb 2015 17:17:16 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sunday, February 22, 2015, John Clark wrote: > On Sat, Feb 21, 2015 Stathis Papaioannou > wrote: > > > If all the measurable inputs and outputs are replicated, how is it >> possible that your brain can notice that there has been a change in >> consciousness as a result of the replacement? >> > > It's possible, although not probable, because consciousness is not > measurable. > Suppose there is a gross change in your consciousness as a result of the replacement. If you are conscious then you should notice and be able to report the change. But the replacement results in normal output to all of your brain, including the part that would notice a change and then send output to your speech centre. So if there is a change in your consciousness, the change will either go unnoticed or, if noticed, you will be unable to report it and will have to watch helplessly as your vocal cords say everything is normal. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Feb 22 10:33:23 2015 From: pharos at gmail.com (BillK) Date: Sun, 22 Feb 2015 10:33:23 +0000 Subject: [ExI] The Robot Big Bang In-Reply-To: <2927890462-17738@secure.ericade.net> References: <54E89B5B.9020405@t-online.de> <2927890462-17738@secure.ericade.net> Message-ID: On 21 February 2015 at 23:54, Anders Sandberg wrote: > Understanding these complications and that there likely is a big automation > shift matters. As does explaining it properly to decisionmakers. 
I am a bit > worried that right now it turns into a simplistic "The robots are coming, so > we need basic income", which means some politicians will immediately accept > or dismiss it depending on their views of basic income, and hence deduce > that robots are either a problem or not a problem... > A starving population is a political problem. Note the millions receiving food stamps (and the millions in prison) in the USA and the desperate attempts in the UK to try and reduce welfare costs. Giving people low paid, zero hours contract jobs is a stop-gap that enables people to survive, but they won't settle for a life like that. As more and more jobs are gradually automated (think ATMs, robot bartenders, etc.), then gradually more and more people will move to 'make-work' jobs. It seems obvious that there will be a big change in the way society is organised. Basic income is one possibility, but you have to keep humans occupied. Remember that one thing humans are really good at is fighting. BillK From anders at aleph.se Sun Feb 22 14:57:27 2015 From: anders at aleph.se (Anders Sandberg) Date: Sun, 22 Feb 2015 15:57:27 +0100 Subject: [ExI] The Robot Big Bang In-Reply-To: Message-ID: <2980987749-16930@secure.ericade.net> BillK , 22/2/2015 11:35 AM: On 21 February 2015 at 23:54, Anders Sandberg wrote: > Understanding these complications and that there likely is a big automation > shift matters. As does explaining it properly to decisionmakers. I am a bit > worried that right now it turns into a simplistic "The robots are coming, so > we need basic income", which means some politicians will immediately accept > or dismiss it depending on their views of basic income, and hence deduce > that robots are either a problem or not a problem... A starving population is a political problem. Note the millions receiving food stamps (and the millions in prison) in the USA and the desperate attempts in the UK to try and reduce welfare costs. You are missing my point. 
What is seen as a problem often depends on one's political outlook. And whether a problem is acknowledged may depend on whether the solutions are acceptable or not. In the US poverty and incarceration are not seen as major problems by a large fraction of people. One strong reason IMHO is that many suggested solutions - redistribution, unified healthcare systems, a non-retributive penal system - are unacceptable to them for ideological reasons. Yes, this is totally backwards. In a sane world people would identify problems first, then look for solutions, and then agree on the acceptable ones. But in practice people turn things around. Which is why so many of your politicians are convinced there cannot be anthropogenic climate change - the proposed solutions smell bad ideologically. http://www.aleph.se/andart/archives/2014/06/do_we_have_to_be_good_to_set_things_right.html So if you want to sell politicians on the idea that the robots are coming, do not link it too strongly to a particular socioeconomic remedy. Otherwise, I foresee a real risk that we will end up with the US liberals embracing the robot big bang as a reason to have guaranteed basic income, and hence the US conservatives systematically blocking any research into AI consequences as a result. The end result might be no income and no safety at all. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Sun Feb 22 15:17:37 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 22 Feb 2015 08:17:37 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: Message-ID: <54E9F311.6020304@canonizer.com> There is something that is responsible for a redness quality. And there is something detectably different, responsible for greenness. This must be, or the brain would not work.
This qualitative difference is the most important part of how consciousness works. Zombie intelligence has hardware interpreters for every single representation that abstract anything physical like this away, and it doesn't matter if you represent it with redness, greenness, silicon, +5 volts, or whatever, as long as you have interpretation hardware to convert it into ones and zeros. To think that we can't isolate and discover this neural correlate of redness, at a minimum, by discovering how our consciousness does it, is just dumb thinking. John, surely you must agree that there is something, physical, that is responsible for an elemental redness quality, right? And why would you think we can't discover what this neural correlate is? Brent Allsop On 2/21/2015 11:17 PM, Stathis Papaioannou wrote: > > > On Sunday, February 22, 2015, John Clark > wrote: > > On Sat, Feb 21, 2015 Stathis Papaioannou > wrote: > > > If all the measurable inputs and outputs are replicated, how > is it possible that your brain can notice that there has been > a change in consciousness as a result of the replacement? > > > It's possible, although not probable, because consciousness is not > measurable. > > > Suppose there is a gross change in your consciousness as a result of > the replacement. If you are conscious then you should notice and be > able to report the change. But the replacement results in normal > output to all of your brain, including the part that would notice a > change and then send output to your speech centre. So if there is > a change in your consciousness, the change will either go unnoticed > or, if noticed, you will be unable to report it and will have to watch > helplessly as your vocal cords say everything is normal.
> > > > > -- > Stathis Papaioannou > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Feb 22 17:23:09 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 22 Feb 2015 12:23:09 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On Sun, Feb 22, 2015 at 1:17 AM, Stathis Papaioannou wrote: > Suppose there is a gross change in your consciousness as a result of the > replacement. If you are conscious then you should notice and be able to > report the change. But the replacement results in normal output to all of > your brain, including the part that would notice a change and then send > output to your speech centre. So if there is a change in > your consciousness, the change will either go unnoticed or, if noticed, > you will be unable to report it and will have to watch helplessly as your > vocal cords say everything is normal. > Then you would know from direct experience something about your consciousness that nobody else knows, namely that although the replacement part reproduces all the inputs and outputs that can be measured in the lab there must be at least one I/O that can't be measured; and you'd also have proof that consciousness is not a byproduct of intelligence and that Darwin's theory was wrong, but unfortunately that proof would be available only to you. But this is not a new situation, you don't need exotic thought experiments, in real everyday life you know things with a certainty that goes beyond mere proof about your consciousness, but although you know it's true you have no way to prove it to others with mathematical rigor. 
John K Clark > > > > > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Sun Feb 22 18:06:07 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 22 Feb 2015 13:06:07 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: <54E9F311.6020304@canonizer.com> References: <54E9F311.6020304@canonizer.com> Message-ID: On Sun, Feb 22, 2015 Brent Allsop wrote: > There is something that is responsible for a redness quality. And there > is something detectably different, responsible for greenness. > I've already suggested what that difference might be, redness is associated with one group of crosslinked memories (strawberries, blood, sunsets, communists, conservative states) while greenness is associated with a different group of crosslinked memories (leaves, emeralds, seasickness, environmentalists). > > surely you must agree that there is something, physical, that is > responsible for an elemental redness quality, right? > I'd bet my life it's physical but I don't have a proof and never will. > > And why would you think we can't discover what this neural correlate is? > You could get to the point where you discover that whenever a certain pattern of neurons fire the person always makes a noise with his mouth that sounds like "I perceive redness", and that would satisfy me. When we get to that point I'd say we're done with this and it's time to move on and start to investigate something else, but apparently you would not and feel there would be more to do. The difference between us is I think consciousness is fundamental but you do not. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brent.allsop at canonizer.com Sun Feb 22 18:40:29 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 22 Feb 2015 11:40:29 -0700 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54E9F311.6020304@canonizer.com> Message-ID: <54EA229D.60904@canonizer.com> On 2/22/2015 11:06 AM, John Clark wrote: > On Sun, Feb 22, 2015 Brent Allsop > wrote: > > > There is something that is responsible for a redness quality. > And there is something detectably different, responsible for > greenness. > > > I've already suggested what that difference might be, redness is > associated with one group of crosslinked memories (strawberries, > blood, sunsets, communists, conservative states) while greenness is > associated with a different group of crosslinked memories (leaves, > emeralds, seasickness, environmentalists). > > To me there is LOTS of evidence that falsifies this view, so it in no way works in my model because it is so inconsistent with so much of what we know. You can find examples (brain malfunctions, drug induced...) where all colors become completely disassociated with all the stuff you talk about, and exist completely independent of all of them. Steven Lehar is an experienced psychonaut, and I bet he could take you through a drug trip that would prove to you that a greenness quality (and other qualities you've never experienced before) can exist not bound to any other information but the quality, itself. You are thinking about composite qualia, and surely you must agree that all of this kind of bound together stuff can be isolated, separated, and fail, independently of the others, and reduced to an elemental level. What you are doing is almost exactly what I point out, in the paper, when I say: "some tend to think of the actual redness quality as being part of the strawberry being perceived, or worse, they think it is nothing real at all.
When they think of the term qualia, they think of everything bound to it, but the *redness* quality." Also, your definition of qualia is so vague, it is of absolutely no use to a theoretician or scientist, because there is no way to prove whether your ill-defined whatever-it-is could exist, in physical terms. How, exactly, would you reproduce whatever you think redness is, artificially? And just the fact that you think so much of what is obviously easy is so impossible, and not approachable via science, is completely off-putting. Brent Allsop -------------- next part -------------- An HTML attachment was scrubbed... URL: From markalanwalker at gmail.com Sun Feb 22 19:35:05 2015 From: markalanwalker at gmail.com (Mark Walker) Date: Sun, 22 Feb 2015 12:35:05 -0700 Subject: [ExI] The Robot Big Bang In-Reply-To: <2980987749-16930@secure.ericade.net> References: <2980987749-16930@secure.ericade.net> Message-ID: Anders, you may be right about the cause of resistance: ideologically pungent solutions. In practice, it is pretty hard to discuss problems without also discussing proposed solutions. Even if you testify before Congress about technological unemployment and take a vow of silence about possible solutions, not everyone will keep quiet. Perhaps the best way to get people to accept there is a problem is to offer a broader range of ideologically acceptable solutions. For example, I suspect many conservatives here in the US would like the solution of making soylent green out of redundant workers as opposed to BIG (basic income guarantee) for those displaced by robots. I'm writing a book on BIG to be published later this year. One of the arguments for BIG is that it will help soften the blow of technological unemployment. I also argue BIG will increase gross national happiness and gross national freedom. One of the arguments that I think conservative types will hate the most is that BIG should be considered to be like a stock dividend.
eBay shows us how to make money owning a market. The US is a much more successful market than eBay, so it offers much greater potential for profit making. The owners of the US market (US citizens) shouldn't run their market like a dilapidated hippy co-op, but should try to maximize profit in the same way that eBay does. This profit, then, can be redistributed to shareholders in the form of a dividend (BIG). http://philos.nmsu.edu/files/2014/07/chapter-3-BIG-BOOK-2015.docx Cheers, Mark Dr. Mark Walker Richard L. Hedden Chair of Advanced Philosophical Studies Department of Philosophy New Mexico State University P.O. Box 30001, MSC 3B Las Cruces, NM 88003-8001 USA http://www.nmsu.edu/~philos/mark-walkers-home-page.html On Sun, Feb 22, 2015 at 7:57 AM, Anders Sandberg wrote: > BillK , 22/2/2015 11:35 AM: > > On 21 February 2015 at 23:54, Anders Sandberg wrote: > > > Understanding these complications and that there likely is a big > automation > > shift matters. As does explaining it properly to decisionmakers. I am a > bit > > worried that right now it turns into a simplistic "The robots are > coming, so > > we need basic income", which means some politicians will immediately > accept > > or dismiss it depending on their views of basic income, and hence deduce > > that robots are either a problem or not a problem... > > > A starving population is a political problem. > Note the millions receiving food stamps (and the millions in prison) > in the USA and the desperate attempts in the UK to try and reduce > welfare costs. > > > > You are missing my point. What is seen as a problem often depends on one's > political outlook. And whether a problem is acknowledged may depend on > whether the solutions are acceptable or not. > > In the US poverty and incarceration are not seen as major problems by a > large fraction of people.
One strong reason IMHO is that many suggested > solutions - redistribution, unified healthcare systems, a non-retributive > penal system - are unacceptable to them for ideological reasons. Yes, this > is totally backwards. In a sane world people would identify problems first, > then look for solutions, and then agree on the acceptable ones. But in > practice people turn things around. Which is why so many of your > politicians are convinced there cannot be anthropogenic climate change - the > proposed solutions smell bad ideologically. > > > http://www.aleph.se/andart/archives/2014/06/do_we_have_to_be_good_to_set_things_right.html > > So if you want to sell politicians on the idea that the robots are coming, > do not link it too strongly to a particular socioeconomic remedy. > > Otherwise, I foresee a real risk that we will end up with the US liberals > embracing the robot big bang as a reason to have guaranteed basic income, > and hence the US conservatives systematically blocking any research into AI > consequences as a result. The end result might be no income and no safety > at all. > > > > Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford > University > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Feb 22 19:40:35 2015 From: pharos at gmail.com (BillK) Date: Sun, 22 Feb 2015 19:40:35 +0000 Subject: [ExI] The Robot Big Bang In-Reply-To: <2980987749-16930@secure.ericade.net> References: <2980987749-16930@secure.ericade.net> Message-ID: On 22 February 2015 at 14:57, Anders Sandberg > You are missing my point. What is seen as a problem often depends on one's > political outlook. And whether a problem is acknowledged may depend on > whether the solutions are acceptable or not.
> > In the US poverty and incarceration are not seen as major problems by a > large fraction of people. One strong reason IMHO is that many suggested > solutions - redistribution, unified healthcare systems, a non-retributive > penal system - are unacceptable to them for ideological reasons. Yes, this > is totally backwards. In a sane world people would identify problems first, > then look for solutions, and then agree on the acceptable ones. But in > practice people turn things around. Which is why so many of your politicians > are convinced there cannot be anthopogenic climate change - the proposed > solutions smell bad ideologically. > > http://www.aleph.se/andart/archives/2014/06/do_we_have_to_be_good_to_set_things_right.html > > So if you want to sell politicians on the idea that the robots are coming, > do not link it too strongly to a particular socioeconomic remedy. > > Otherwise, I foresee a real risk that we will end up with the US liberals > embracing the robot big bang as a reason to have guaranteed basic income, > and hence the US conservatives systematically blocking any research into AI > consequences as a result. The end result might be no income and no safety at > all. > Oh I didn't miss your point at all. :) But the robots are coming whether politicians like it or not. In fact politicians are encouraging robot development as they increase corporate profits. (Not necessarily increasing productivity - robots just need to be lower cost than humans to increase profits). If the politicians' ideology says that the solution to human unemployment is starvation, brutalisation and imprisonment of more and more unemployed people, then the future is bleak indeed. Hopefully democracy will prevail to help the people, but the corporate states that most western nations have developed appear to place a low value on unemployed or disabled people with little money. 
BillK From Carsten.Zander at t-online.de Sun Feb 22 19:46:18 2015 From: Carsten.Zander at t-online.de (Carsten Zander) Date: Sun, 22 Feb 2015 20:46:18 +0100 Subject: [ExI] The Robot Big Bang In-Reply-To: <2980987749-16930@secure.ericade.net> References: <2980987749-16930@secure.ericade.net> Message-ID: <54EA320A.9020905@t-online.de> On 22 February 2015 at 3:57 pm, Anders Sandberg wrote: > > You are missing my point. What is seen as a problem often depends on > one's political outlook. And whether a problem is acknowledged may > depend on whether the solutions are acceptable or not. > > In the US poverty and incarceration are not seen as major problems by > a large fraction of people. One strong reason IMHO is that many > suggested solutions - redistribution, unified healthcare systems, a > non-retributive penal system - are unacceptable to them for > ideological reasons. Yes, this is totally backwards. In a sane world > people would identify problems first, then look for solutions, and > then agree on the acceptable ones. But in practice people turn things > around. Which is why so many of your politicians are convinced there > cannot be anthropogenic climate change - the proposed solutions smell > bad ideologically. > > http://www.aleph.se/andart/archives/2014/06/do_we_have_to_be_good_to_set_things_right.html > > So if you want to sell politicians on the idea that the robots are > coming, do not link it too strongly to a particular socioeconomic remedy. > > Otherwise, I foresee a real risk that we will end up with the US > liberals embracing the robot big bang as a reason to have guaranteed > basic income, and hence the US conservatives systematically blocking > any research into AI consequences as a result. The end result might be > no income and no safety at all. > It's a dilemma. Convincing the people of a basic income is a very slow process. I'm afraid the Robot Big Bang will be faster.
What would happen if the Robot Big Bang occurs and there is no basic income? I think telling the truth to the people would be the best way: "The robots are coming. All people will need a basic income." This gives us a little hope: "Why the Tech Elite Is Getting Behind Universal Basic Income" http://www.vice.com/read/something-for-everyone-0000546-v22n1 Carsten From stathisp at gmail.com Sun Feb 22 22:13:19 2015 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 23 Feb 2015 09:13:19 +1100 Subject: [ExI] Zombie glutamate In-Reply-To: References: <54DFA05E.3070107@canonizer.com> <673463DD-BE4E-4FA7-BF92-5724E8241A2C@gmail.com> Message-ID: On 23 February 2015 at 04:23, John Clark wrote: > On Sun, Feb 22, 2015 at 1:17 AM, Stathis Papaioannou > wrote: > >> > Suppose there is a gross change in your consciousness as a result of the >> > replacement. If you are conscious then you should notice and be able to >> > report the change. But the replacement results in normal output to all of >> > your brain, including the part that would notice a change and then send >> > output to your speech centre. So if there is a change in your consciousness, >> > the change will either go unnoticed or, if noticed, you will be unable to >> > report it and will have to watch helplessly as your vocal cords say >> > everything is normal. > > > Then you would know from direct experience something about your > consciousness that nobody else knows, namely that although the replacement > part reproduces all the inputs and outputs that can be measured in the lab > there must be at least one I/O that can't be measured; and you'd also have > proof that consciousness is not a byproduct of intelligence and that > Darwin's theory was wrong, but unfortunately that proof would be available > only to you.
But this is not a new situation, you don't need exotic thought > experiments, in real everyday life you know things with a certainty that > goes beyond mere proof about your consciousness, but although you know it's > true you have no way to prove it to others with mathematical rigor. What would happen if consciousness could be separated from behaviour is that your brain would go in one direction, controlling your body, while your mind would go off in another, as a helpless spectator. That would mean consciousness is not due to physical activity in the brain, contradicting the single initial assumption. If you grant that assumption, then consciousness is provably what you call a byproduct of intelligence. -- Stathis Papaioannou From pharos at gmail.com Sun Feb 22 22:37:12 2015 From: pharos at gmail.com (BillK) Date: Sun, 22 Feb 2015 22:37:12 +0000 Subject: [ExI] The Robot Big Bang In-Reply-To: <2980987749-16930@secure.ericade.net> References: <2980987749-16930@secure.ericade.net> Message-ID: On 22 February 2015 at 14:57, Anders Sandberg > What is seen as a problem often depends on one's > political outlook. And whether a problem is acknowledged may depend on > whether the solutions are acceptable or not. > > Let us hope that the proposed solutions to the 12 existential risks do not meet political ideological objections. Some strongly held beliefs are more important than life itself. BillK From anders at aleph.se Sun Feb 22 23:42:41 2015 From: anders at aleph.se (Anders Sandberg) Date: Mon, 23 Feb 2015 00:42:41 +0100 Subject: [ExI] The Robot Big Bang In-Reply-To: Message-ID: <3014305160-19191@secure.ericade.net> Mark Walker , 22/2/2015 8:38 PM: I'm writing a book on BIG to be published later this year. One of the arguments for BIG is that it will help soften the blow of technological unemployment. I also argue BIG will increase gross national happiness and gross national freedom. Make sure to clearly bring up the libertarian proponents of BIG. 
If conservatives find out that Friedman was in favour of it, and that it will cut welfare, then you can at least confuse them. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From danust2012 at gmail.com Mon Feb 23 00:25:58 2015 From: danust2012 at gmail.com (Dan) Date: Sun, 22 Feb 2015 16:25:58 -0800 Subject: [ExI] Basic Income Guarantee/was Re: The Robot Big Bang In-Reply-To: <3014305160-19191@secure.ericade.net> References: <3014305160-19191@secure.ericade.net> Message-ID: <498966A3-5455-4362-A395-20E076E02FEB@gmail.com> > On Feb 22, 2015, at 3:42 PM, Anders Sandberg wrote: > Mark Walker , 22/2/2015 8:38 PM: > I'm writing a book on BIG to be published later this year. One of the arguments for BIG is that it will help soften the blow of technological unemployment. I also argue BIG will increase gross national happiness and gross national freedom. > > Make sure to clearly bring up the libertarian proponents of BIG. If conservatives find out that Friedman was in favour of it, and that it will cut welfare, then you can at least confuse them. And not just Friedman. There are more recent libertarian supporters of the idea. See c4ss.org for some current discussions -- e.g., http://c4ss.org/content/25618 I would be worried about the "Bismarck" concern here. BIG could easily turn into a tool of control and oppression -- and for stifling any threats to the elite. Regards, Dan See my Kindle books at: http://www.amazon.com/Dan-Ust/e/B00J6HPX8M/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From outlawpoet at gmail.com Mon Feb 23 01:16:38 2015 From: outlawpoet at gmail.com (justin corwin) Date: Sun, 22 Feb 2015 17:16:38 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: Hi Adrian! I didn't realize you were working in this area. On Mon, Feb 16, 2015 at 4:31 PM, Adrian Tymes wrote: > > http://cubecab.com/ > > We're working on it. > A dedicated cubesat launcher is definitely interesting. Are things too early stage for you to talk more specifics? Is this a new launcher, or are you buying/licensing a rocket from somebody else? You say you aren't rounding up lots of cubesats before launching, does that mean it's a relatively small launcher, like the ones from earlier in this thread? I see you won a business plan competition last year, presumably you had some compelling numbers, but I can't find a lot of details about your approach other than the focus on cubesats. > Also, currently small payloads often have to wait, which mean timely >> payloads need dedicated launchers. If you could guarantee a launch within >> six months, you might attract more business as well. >> > > Yep that's a definite pain point we've noticed. We are planning to launch > within six months of contract signing - possibly less, but the gating > factor at that point is getting government clearance (mainly FAA, probably > FCC, possibly NOAA & Department of Commerce, depending on who's launching > for who and what the satellite does). In theory we might be able to pull > sub-week turnarounds if all the agencies gave immediate approvals (which > would probably only happen for NASA or USAF emergencies). > Bureaucracy definitely is a drag, particularly when the margins aren't high enough to dedicate people to navigating that stuff enough to make it a side issue. 
I once looked to sea launch in international waters as a possible shortcut, but experience seems to show that people like Sea Launch and ESA in French Guiana end up complying with all that stuff *anyway* because their customers have requirements that interact with those bureaucracies. So there's no escaping it, for now, at least. I like thinking about it, but I worry that the margins just aren't high enough to deal with all the crap that being a real space company brings you. I would be very interested in your experiences in running/starting cubecab. -- Justin Corwin outlawpoet at gmail.com http://programmaticconquest.tumblr.com http://outlawpoet.tumblr.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Feb 23 02:45:12 2015 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 22 Feb 2015 18:45:12 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: On Sun, Feb 22, 2015 at 5:16 PM, justin corwin wrote: > Hi Adrian! I didn't realize you were working in this area. > I thought for sure I'd mentioned CubeCab on this list before. > A dedicated cubesat launcher is definitely interesting. Are things too > early stage for you to talk more specifics? > I can give some info. > Is this a new launcher, or are you buying/licensing a rocket from somebody > else? > New. > You say you aren't rounding up lots of cubesats before launching, does > that mean it's a relatively small launcher, like the ones from earlier in > this thread? > A tiny launcher with a payload capacity of 1 CubeSat. We're making 2 models: one for 1Us and one for 3Us. The 3U is easier - scaling rockets down is our biggest technical challenge - so we're working on that first. 
> I like thinking about it, but I worry that the margins just aren't high > enough to deal with all the crap that being a real space company brings > you. I would be very interested in your experiences in running/starting > cubecab. > Getting the cost down is part of the core of our development effort. I do not suppose you might be able to help? We have engineers (and you wouldn't be able to help with that anyway, unless you're local to the Silicon Valley area); what we need help with is all the other things that go into a business: marketing, fundraising, et cetera. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Feb 23 03:12:15 2015 From: johnkclark at gmail.com (John Clark) Date: Sun, 22 Feb 2015 22:12:15 -0500 Subject: [ExI] Zombie glutamate In-Reply-To: <54EA229D.60904@canonizer.com> References: <54E9F311.6020304@canonizer.com> <54EA229D.60904@canonizer.com> Message-ID: On Sun, Feb 22, 2015 Brent Allsop wrote: >> I've already suggested what that difference might be, redness is > associated with one group of crosslinked memories (strawberries, blood, > sunsets, communists, conservative states) while greenness is associated > with a different group of crosslinked memories (leaves, emeralds, > seasickness, environmentalists). > To me there is LOTS of evidence that falsifies this view, so it in no way > works in my model because it is so inconsistent with so much of what we > know. You can find examples (brain malfunctions, drug induced...) where > all colors become completely disassociated with all the stuff you talk > about, How does that disprove anything? Sure you can screw around with the crosslinks in the brain so that they're different, you could change them so you could smell colors and see sounds, but so what?
> What you are doing is almost exactly what I point out, in the paper, when > I say: > " some tend to think of the actual redness quality as being part of the > strawberry being perceived" Well it looks like it's strawman time again. I'd have to be a fool to believe that the subjective experience of redness is embodied in the strawberry itself and I am not a fool. > > or worse, they think it is nothing real at all. And that would be even more foolish, subjectivity is the most real, most concrete and least abstract thing there is. > Also, your definition of qualia is so vague, it is of absolutely no use to > a theoretician or scientist, Exactly, a theoretical scientist investigating qualia would be nothing but a pointless timesink because qualia like consciousness is fundamental. > because there is no way to prove whether your ill-defined First of all in mathematics you can prove things but in science although you can find evidence you can't prove theories only disprove them. And it's true that like most important things in life I don't have a definition of qualia that's worth a damn and you don't either, but we have examples. > > > How, exactly, would you reproduce whatever you think redness is, > artificially? An even better question is how could you convince somebody else that you had been successful. And even in general I can't imagine what sort of physical explanation would satisfy you. If I said A causes B and B causes C and C causes consciousness you would ask but why does C cause consciousness? If I answered because C causes D and D causes E and E causes consciousness you would ask why does E cause consciousness? The chain of "why" questions cannot continue forever, or at least you can't expect a useful answer forever, because eventually you will reach an irreducible axiom of existence, such as consciousness is the way data feels like when it is being processed.
Consciousness is in no way unique in this regard, start with any physical phenomenon and eventually the chain of "why" questions concerning it will converge asymptotically to "Why is "is" is?". When that starts to happen it's a sure sign that your time could be better spent explaining other physical phenomena. I mean it's not as if there isn't anything else to do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From avant at sollegro.com Mon Feb 23 05:47:31 2015 From: avant at sollegro.com (Stuart LaForge) Date: Mon, 23 Feb 2015 05:47:31 +0000 Subject: [ExI] Biochemical Chauvinism (was Re: Zombie glutamate) In-Reply-To: References: Message-ID: <20150223054731.Horde.c_i7uhG1op9CKeByDoBRAg6@secure88.inmotionhosting.com> Quoting extropy-chat-request at lists.extropy.org: Quoting John Clark: > Date: Sat, 21 Feb 2015 22:05:30 -0500 > From: John Clark > To: ExI chat list > Subject: Re: [ExI] Zombie glutamate > Message-ID: > > Content-Type: text/plain; charset="utf-8" > The shape is all important in biology but if its function is translated > into another medium such as electronics then the shape of the original > molecule is utterly irrelevant and the only important thing is the bit of > information it carries. There are several problems with your top-down approach to simulation: First, to use your own analogy, you keep looking for the message in the bottle but there is none. The bottle itself is the message, and the recipient interprets the message however they will. Similarly the purpose of a molecule is not magically stamped onto it somewhere like some message to be read. Now don't get me wrong, there are a whole lot of bits of information in every molecule but that is the literal intrinsic information of its existence. Like the quantum mechanical wavefunction that determines the molecule's location, shape, charge distribution, chemical concentration, etc.
But there is no "bit of information" that it carries that maps to its function in an organism. Furthermore a given molecule often serves different purposes in different cells in the same organism. In other words that abstract functional information you want is entirely context dependent and is therefore a property of the system that the molecule is a part of and not of the molecule itself. The only information intrinsic to the molecule itself is its quantum mechanical shape and its location in space-time relative to other molecules. Form + Context -> Function To use your bottle analogy, when Alice made the bottle, she used it as a container in which to give a tasty beverage to Bob. When Bob threw the bottle into the ocean he was using it as a flotation device to see which way the current was flowing. And when Carol found the bottle on the beach, she put flowers in it and used it as a vase. The only sense in which the bottle is a "message" is if there was some sort of prearranged agreement between the three actors as to what the bottle "means". Yet it is entirely possible that a system spontaneously arises wherein Alice, Bob, and Carol repeatedly do exactly what I described to the benefit of all three, without any agreement at all. In that situation, what is the message of the bottle? What is its function? > Oh no are we really going back to the Bekenstein bound, something that > virtually no biologist thinks is of the slightest importance? Very well if > you want to play that silly game, my iMac has a larger surface area > than your brain therefore according to Bekenstein it contains more > information than your brain. QED > > Yes it's a silly game but you're the one who wanted to play. Oh come now, debate tricks like red herrings are beneath this discussion.
I repeatedly stated that the Bekenstein bound is an *upper limit* to the amount of quantum mechanical information that a molecule, brain, or other system could possibly contain. Yes, most biologists would not think it was important. So what? I am not most biologists. And yes, by virtue of its larger size, it is theoretically possible within the laws of physics for your iMac to someday contain more information than a human brain. Let's just hope we as a species survive long enough for that to happen. > >>> All schematics are, by necessity, simplifications of the real deal. > > > Yes, a good simplification gets rid of pointless wheels within wheels and > gets to the essentials. Essentials? I thought you didn't believe in immaterial essences? By general relativity, epicycles are a perfectly valid description of the solar system because all reference frames are valid. The math is a lot more complicated when you make the earth the center of the solar system instead of the sun, but the math still works out perfectly fine. There are no essentials, just details you leave in or out of your model either to make it more accurate or to simplify it. Eliminating the wheels within wheels from your model of the solar system would simplify it. Eliminating the wheels within wheels from your grandfather clock would simply *break* it. > No the schematics alone won't get me to work because a car is a noun, and > a brick is a noun too, so I can't build a house I can live in with the simulation > of a brick, but when you're talking about information things are very > different. My calculator does real arithmetic not simulated arithmetic and > my iPod plays real music not simulated music. So the question you have to > ask yourself is are you more like a symphony or more like a brick? Your calculator and your iPod are both real. But your calculator merely computes patterns of light and dark on a display and your iPod merely outputs patterns of sound waves.
The math and the music are both properties of you and not your devices, or more precisely of the system composed of you *and* your devices. And like epicycles, the answer to your question depends on your point of view. To those that perceive me, I am more like a symphony. To those that don't, I am more like a brick. > A simulated flame is certainly not identical to a real flame but to say it > has absolutely no reality can lead to problems. Suppose you say that for a > fire to be real it must have some immaterial essence of fire, a sort of > "burning" soul, thus a simulated flame does not really burn because it just > changes the pattern in a computer memory. The trouble is, using the same > reasoning you could say that a real fire doesn't really burn, it just > oxidizes chemicals; but really a flame can't even do that, it just obeys > the laws of chemistry. If we continue with this we soon reach a point where > nothing is real but the fundamental laws of physics, I don't think either > of us wants to embrace that position. Yes information is real, yes it causes shit to happen, but it only really *means* anything to an observer capable of decoding and processing it. There is no essence other than the details you choose to pay attention to and those you don't. Nature pays attention to everything all the time. You do not. Therefore essences exist only in your mind and not in nature. Your model of how information causes a system to behave will therefore be incomplete unless you simulate it from the bottom up. Once you do that, then you can try to simplify or re-engineer your system without breaking it. That way you can empirically determine a necessary and sufficient set of functional parts instead of dealing with sloppy simplifications that could leave out crucial components. As far as your question of whether anything other than the fundamental laws of physics is real, have you read up on string theory lately?
Supposedly the universe is a giant hologram whose true reality is bits/pixels of information moving around on the 2-D surface of the spherical event horizon of the visible universe projected into the 3-D space within. Do those pixels move around on the surface of the sphere faster than light? *WILD* > I think a simulated flame is real at one level but care must be taken not > to confuse levels. A simulated flame won't burn your computer but it will > burn a simulated object. A real flame won't burn the laws of chemistry but > it will burn your finger. A simulated flame will only burn a simulated object if a programmer instructs it to or if the programmer simulates *everything*. If a programmer doesn't know or care about that aspect of the flame, he won't incorporate that function. A programmer cannot simulate what he doesn't understand so he can only try to simulate everything about a system that he understands. That may or may not be enough. >> Would you as an upload even be aware that that component of your mind was >> missing? >> > > If it's important to me I'd notice, if it's not I don't care if it's > missing. If it was truly important, you would no longer exist to notice or care it was missing. If you doubt me, just ask Garrosh Hellscream about free agency. ;-) From outlawpoet at gmail.com Mon Feb 23 08:34:25 2015 From: outlawpoet at gmail.com (justin corwin) Date: Mon, 23 Feb 2015 00:34:25 -0800 Subject: [ExI] darpa's notion of using a retrofitted fighter jet to launch payloads In-Reply-To: References: <001201d04241$8a095800$9e1c0800$@att.net> <014701d0447d$f9743e10$ec5cba30$@att.net> Message-ID: Interesting stuff Adrian, On Sun, Feb 22, 2015 at 6:45 PM, Adrian Tymes wrote: > On Sun, Feb 22, 2015 at 5:16 PM, justin corwin > wrote: > >> Hi Adrian! I didn't realize you were working in this area. >> > > I thought for sure I'd mentioned CubeCab on this list before. 
> That's my bad, searching my archives proves you have, I just have been pretty absentee on this list for some time. I see you won that NewSpace business plan contest last year, good stuff. > You say you aren't rounding up lots of cubesats before launching, does >> that mean it's a relatively small launcher, like the ones from earlier in >> this thread? >> > > A tiny launcher with a payload capacity of 1 CubeSat. We're making 2 > models: one for 1Us and one for 3Us. The 3U is easier - scaling rockets > down is our biggest technical challenge - so we're working on that first. > I can imagine, as the fractions get tighter, engineering gets complicated. Do you have a unique solution you are willing to talk about? One of the things I noticed is that your website is very light on details. That can be a good or a bad thing, but particularly with a new launcher, vagueness is problematic. > I do not suppose you might be able to help? We have engineers (and you > wouldn't be able to help with that anyway, unless you're local to the > Silicon Valley area); what we need help with is all the other things that > go into a business: marketing, fundraising, et cetera. > Well I'm down in LA, and my specialty is software. But I have been involved in a few startups and crowdfunding campaigns. I've gotten pretty good at writing copy and tweaking spreadsheets and slidedecks. Send me a private email offlist and we can see if there's anything I can help with. -- Justin Corwin outlawpoet at gmail.com http://programmaticconquest.tumblr.com http://outlawpoet.tumblr.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From markalanwalker at gmail.com Mon Feb 23 15:21:16 2015 From: markalanwalker at gmail.com (Mark Walker) Date: Mon, 23 Feb 2015 08:21:16 -0700 Subject: [ExI] The Robot Big Bang In-Reply-To: <3014305160-19191@secure.ericade.net> References: <3014305160-19191@secure.ericade.net> Message-ID: Yes, Friedman is mentioned briefly. 
It is interesting that BIG is a policy rather than a political philosophy, hence, there is a certain convergence between otherwise radically different views. For example, radical socialists also endorse BIG as a second best remedy. I confess I am not sure I understand Friedman's view. He seems to argue the conditional: If you are going to give welfare, then we should have BIG. As I understand him, his argument is twofold: one reason has to do with efficiency. It cuts out a small army of bureaucrats managing welfare programs. BIG also stops a lot of the paternalism of contemporary welfare programs. However, I don't see Friedman arguing straight up that we ought to provide BIG. Perhaps I am wrong. I confess I haven't studied Friedman extensively. Thanks for the suggestion! Mark Dr. Mark Walker Richard L. Hedden Chair of Advanced Philosophical Studies Department of Philosophy New Mexico State University P.O. Box 30001, MSC 3B Las Cruces, NM 88003-8001 USA http://www.nmsu.edu/~philos/mark-walkers-home-page.html On Sun, Feb 22, 2015 at 4:42 PM, Anders Sandberg wrote: > Mark Walker , 22/2/2015 8:38 PM: > > I'm writing a book on BIG to be published later this year. One of the > arguments for BIG is that it will help soften the blow of technological > unemployment. I also argue BIG will increase gross national happiness and > gross national freedom. > > > > Make sure to clearly bring up the libertarian proponents of BIG. If > conservatives find out that Friedman was in favour of it, and that it will > cut welfare, then you can at least confuse them. > > > > Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford > University > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avant at sollegro.com Mon Feb 23 19:46:02 2015 From: avant at sollegro.com (Stuart LaForge) Date: Mon, 23 Feb 2015 19:46:02 +0000 Subject: [ExI] Black hole brains (was Re: taxonomy for fermi paradox fans) Message-ID: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> Here is a crazy new scenario to add to the list: Perhaps the time period between a civilization developing recursively self-improving general AI and its subsequent development of computronium is relatively short compared to geologic time scales. Computronium, being the maximally optimized medium for computation, quickly saturates the Bekenstein bound of their region of space-time by being so information dense. This causes their space-time to warp to the point of pinching itself off, forming an event horizon around them. This effectively renders the civilization a black hole to those observers still in comparatively flat space-time, meaning that since no information can escape the event horizon, no civilization outside the black hole can detect the civilization inside the black hole. Meanwhile, inside the black hole, the post-singularity civilization effectively exists in its own universe, with mass-energy and information continually pouring in from the outside. Thus, limited perhaps to competition between like civilizations, the civilization in question can grow to massive proportions, becoming a Kardashev scale type 3 civilization controlling their galaxy by becoming the billion solar mass black hole at the galactic nucleus, thereby secretly ruling a galaxy without ever leaving home. If any of the authors on the list would like to write a science fiction novel on this premise, I would gladly co-author or consult with them for a share of the profits.
:-) Stuart LaForge From connor_flexman at brown.edu Mon Feb 23 22:30:55 2015 From: connor_flexman at brown.edu (Flexman, Connor) Date: Mon, 23 Feb 2015 17:30:55 -0500 Subject: [ExI] Black hole brains (was Re: taxonomy for fermi paradox fans) In-Reply-To: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> References: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> Message-ID: On Mon, Feb 23, 2015 at 2:46 PM, Stuart LaForge wrote: > > Perhaps the time period between a civilization developing recursively > self-improving general AI and its subsequent development of computronium is > relatively short compared to geologic time scales. Computronium, being the > maximally optimized medium for computation, quickly saturates the > Bekenstein bound of their region of space-time by being so information > dense. This causes their space-time to warp to the point of pinching itself > off, forming an event horizon around them. This effectively renders the > civilization a black hole to those observers still in comparatively flat > space-time meaning that since no information can escape the event horizon, > no civilization outside the black hole can detect the civilization inside > the black hole. > > Meanwhile, inside the black hole, the post-singularity civilization > effectively exists in its own universe, with mass-energy and information > continually pouring in from the outside. Thus, limited perhaps to > competition between like civilizations, the civilization in question can > grow to massive proportions becoming a Kardashev scale type 3 civilization, > controlling their galaxy by becoming the billion solar mass black hole > galactic nucleus. Thereby secretly ruling a galaxy without ever leaving > home. > I think the downside to this proposal is that inside a black hole's event horizon, all paths through spacetime point radially inward.
Not only would they be able to receive information from the outside world but not send any back, they would also only be able to receive information from people farther out than their own radial distance, without being able to communicate information back out to them. Also note that present consensus generally seems to favor the view that information can indeed leave the event horizon, for information is not destroyed. Anyone with better knowledge of information theory than I have: what is the relationship between computronium, the Bekenstein limit, and black holes? Are all three reached at the same point? Don't other limits to computation cap the amount of information we can cram into computronium before we reach the Bekenstein limit and create a black hole? Connor -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Mon Feb 23 22:50:49 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 23 Feb 2015 17:50:49 -0500 Subject: [ExI] Biochemical Chauvanism (was Re: Zombie glutamate) In-Reply-To: <20150223054731.Horde.c_i7uhG1op9CKeByDoBRAg6@secure88.inmotionhosting.com> References: <20150223054731.Horde.c_i7uhG1op9CKeByDoBRAg6@secure88.inmotionhosting.com> Message-ID: On Mon, Feb 23, 2015 Stuart LaForge wrote: > First, to use your own analogy, you keep looking for the message in the > bottle but there is none. The bottle itself is the message Yes, and one bottle is identical to another and one molecule of the hormone adrenaline is identical to another molecule of adrenaline so each molecule (or bottle) carries the same identical message (Mr. Heart, beat faster) and if it's desirable for the heart to beat even faster the only solution is for the adrenal gland to secrete more molecules of adrenaline into the bloodstream and wait for them to randomly diffuse to the heart. So you've got an extremely slow, extremely low-bandwidth, extremely primitive communication system.
> > purpose of a molecule is not magically stamped onto it somewhere like > some message to be read. > It's not magical, it's chemical, and purpose implies intention and molecules have none. But Messenger RNA can certainly read the message in DNA and copy it and translate it into the RNA language, and Ribosomal RNA can certainly read Messenger RNA and translate the RNA/nucleotide language into the amino-acid/protein language. > > But there is no "bit of information" that it carries that maps to its > function in an organism. It's not a map, it's a recipe, and in the entire human genome there are only 3 billion base pairs. There are 4 bases so each base can represent 2 bits, there are 8 bits per byte so that comes out to just 750 meg, and that's enough assembly instructions to make not just a brain and all its wiring but an entire human baby. So the instructions MUST contain wiring instructions such as "wire a neuron up this way and then repeat that procedure exactly the same way 917 billion times". And there is a huge amount of redundancy in the human genome so if you used a file compression program like ZIP on that 750 meg you could easily put the entire thing on a CD, not a DVD, not a Blu-ray, just an old-fashioned steam-powered vanilla CD, and you'd still have room for dozens of Lady Gaga songs. > > Furthermore a given molecule often serves different purposes in > different cells in the same organism. Irrelevant, one molecule is identical to another so whatever the message is it's identical in all of the molecules. And by the way, there are only about 200 different types of cells in the human body. > > The only sense in which the bottle is a "message" is if there was some > sort of prearranged agreement between the three actors as to what the > bottle "means".
And that is exactly what happens in both biology and electrical engineering: the adrenal gland has an agreement with the heart about what adrenaline means, and the memory chip and microprocessor chip in my computer have a similar agreement. > > Yet it is entirely possible that a system spontaneously arises wherein > Alice, Bob, and Carol repeatedly do exactly what I described to the benefit > of all three, without any agreement at all. In that situation That won't work if Carol doesn't understand Alice's language, or at least it won't work unless Carol understands Bob and Bob understands Alice and Bob can translate; and that's what happens in biology with DNA-RNA-protein. >> Oh no are we really going back to the Bekenstein bound, something >> that virtually no biologist thinks is of the slightest importance? Very >> well if you want to play that silly game, my iMac has a larger surface >> area than your brain, therefore according to Bekenstein it contains >> more information than your brain. QED >> Yes it's a silly game but you're the one who wanted to play. >> > > > Oh come now, debate tricks like red herrings are beneath this discussion. LOOK WHO'S TALKING! You're the one who brought up the Bekenstein bound, not me, so if you say it's relevant to biology (and it's not) then I can say it's relevant to electrical engineering. > > I repeatedly stated that the Bekenstein bound is an *upper limit* to > the amount of quantum mechanical information that a molecule, brain, or > other system could possibly contain. And the speed of light is the *upper limit* to how fast my car can go, and now you've learned as much about my car as I learned about the brain when you said the Bekenstein bound is an *upper limit* to the amount it can contain.
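For concreteness, the bound under dispute has a simple closed form. A minimal sketch in Python (the formula is Bekenstein's standard I <= 2*pi*R*E/(hbar*c*ln 2); the ~10 cm radius and 1.4 kg mass for a brain are illustrative assumptions, not figures from the thread):

```python
import math

# Bekenstein bound in bits: I <= 2*pi*R*E / (hbar * c * ln 2).
# The brain radius and mass below are rough illustrative guesses.
HBAR = 1.0546e-34   # reduced Planck constant, J*s
C = 2.998e8         # speed of light, m/s

def bekenstein_bits(radius_m: float, mass_kg: float) -> float:
    energy = mass_kg * C ** 2          # rest-mass energy, E = m c^2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

print(f"{bekenstein_bits(0.1, 1.4):.1e}")   # -> 3.6e+42 (bits)
```

The bound grows with size and energy, which is all the iMac-versus-brain quip trades on; nothing made of ordinary matter comes anywhere near saturating it.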
In fact I wouldn't be surprised if most biologists, even most Nobel Prize winning biologists, have never even heard of the Bekenstein bound and their ignorance has not hampered their biological work one iota. > > And yes by virtue of its larger size, it is theoretically possible > within the laws of physics for your iMac to someday contain more > information than a human brain. My computer already contains more information than your brain and according to Bekenstein so does a rock provided it has a larger surface area than your brain. Do you think this is important? I don't. > Yes, a good simplification gets rid of pointless wheels within wheels >> and gets to the essentials. >> >> Essentials? > Yes essentials. > >I thought you didn't believe in immaterial essences? Oh, is finding shorter and faster algorithms or avoiding and shunning repetition and unnecessary inessential and needless repetition and redundancy and repetition supposed to be metaphysical now? But to tell the truth I do sorta kinda have a metaphysical side, I believe that Information is as close as you can get to the traditional concept of the soul and still remain within the scientific method. Consider the similarities: The soul is non-material and so is information. It's difficult to pin down a unique physical location for the soul, and the same is true for information. The soul is the essential, must-have, part of consciousness, and exactly the same is true for information. The soul is immortal and so, potentially, is information. However there are important differences: A soul is unique but information can be duplicated. The soul is and will always remain unfathomable, but information is understandable, in fact, information is the ONLY thing that is understandable. Information unambiguously exists, I don't think anyone would deny that, but if the soul exists it will never be proven scientifically.
> > By general relativity, epicycles are a perfectly valid description of > the solar system because all reference frames are valid. That is incorrect. No finite number of epicycles will ever do as good a job as Newton, much less Einstein, because planets don't move in circles, they move in ellipses. Kepler knew that 400 years ago. > > your calculator merely computes patterns of light and dark on a display > and your iPod merely outputs patterns of sound waves. And your brain merely sends electrochemical signals (about one million times slower than the signals in my computer) from one neuron to another. When you break something down into smaller and smaller and simpler and simpler parts then eventually, no matter how grand and glorious and beautiful it is, you will come to a part where you have to use the word "merely" to describe what it does. > > > Yes information is real, yes it causes shit to happen, but it only > really *means* anything to an observer capable of decoding and processing > it. The microprocessor in my computer knows how to decode and process the information it received from the disk drive and Ribosomal RNA knows how to decode and process the information in messenger RNA. If you want to call a microprocessor and Ribosomal RNA "observers" that's up to you. > A simulated flame will only burn a simulated object if a programmer > instructs it to Not necessarily, a programmer might not know and have no way of knowing that a simulated flame would even appear in his simulation, much less that it would burn up a simulated object in it. A purely deterministic computer can behave in ways that the programmer did not and could not expect. It would only take a few minutes to write a program to look for the first even number greater than 2 that is not the sum of two prime numbers and then stop. But will the machine ever stop? I don't know, you don't know, even the computer doesn't know.
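That few-minute program is easy to sketch (hypothetical code, not John's; a cutoff parameter is added so that this demo version is guaranteed to halt, whereas the uncapped search is the genuinely unpredictable one):

```python
# Search for the first even number > 2 that is NOT a sum of two primes.
# Whether the uncapped search ever stops is exactly Goldbach's conjecture.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample(limit: int):
    """Return the first even n > 2 with no prime pair p + q = n,
    or None if no counterexample is found up to `limit`."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n   # the machine stops: Goldbach refuted
    return None

print(goldbach_counterexample(10_000))   # -> None (no counterexample found)
```

Remove the cap and you are betting against every even number ever tested; nobody has a proof either way, which is John's point about not knowing whether the machine halts.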
Maybe it will stop in the next 5 seconds, maybe it will stop in 50 billion years, and maybe it will never stop. If you want to know what the machine will actually do you just have to watch it and see, and you might be watching forever. And just like us the machine doesn't know what it will do until it actually does it. And if a little 5 line program can be that unpredictable think about a 5 million or 5 billion line simulation program. > > A programmer cannot simulate what he doesn't understand If that were true there would be no point in simulating anything. > he can only try to simulate everything about a system that he > understands. Why simulate something if you already know what it will do? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Feb 24 03:08:54 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 23 Feb 2015 22:08:54 -0500 Subject: [ExI] Black hole brains (was Re: taxonomy for fermi paradox fans) In-Reply-To: References: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> Message-ID: On Mon, Feb 23, 2015 Flexman, Connor wrote: > Perhaps the time period between a civilization developing recursively > self-improving general AI and its subsequent development of computronium is > relatively short compared to geologic time scales. Computronium, being the > maximally optimized medium for computation, quickly saturates the > Bekenstein bound > Computronium uses nanotechnology and makes use of parts about 10^-9 meters in length (that's about 10 atoms long). For the Bekenstein bound to become relevant the parts would have to approach the Planck length of 10^-35 meters; that's about 100 million billion billion times shorter, with a million trillion trillion trillion trillion trillion trillion times less volume.
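Those word-number ratios check out; a quick order-of-magnitude sketch (using 10^-35 m for the Planck length, which is really closer to 1.6e-35 m):

```python
# Order-of-magnitude check of the nanotech vs. Planck-scale comparison.
nanotech_part = 1e-9     # meters, roughly 10 atoms long
planck_length = 1e-35    # meters

length_ratio = nanotech_part / planck_length
volume_ratio = length_ratio ** 3

print(f"{length_ratio:.0e}")   # -> 1e+26, i.e. 100 million billion billion
print(f"{volume_ratio:.0e}")   # -> 1e+78, i.e. a million times trillion-to-the-sixth
```

So computronium built from ~10-atom parts sits 26 orders of magnitude in length above the scale where the Bekenstein bound starts to bite.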
Nanotechnology would require no new discoveries in fundamental physics, just improvements in technology, but building machines at the Planck scale would require new physics and as far as we know now can't be done. The Bekenstein bound is important if you want to talk about black holes but not for much else. John K Clark > of their region of space-time by being so information dense. This causes >> their space-time to warp to the point of pinching itself off, forming an >> event horizon around them. This effectively renders the civilization a >> black hole to those observers still in comparatively flat space-time >> meaning that since no information can escape the event horizon, no >> civilization outside the black hole can detect the civilization inside the >> black hole. >> >> Meanwhile, inside the black hole, the post-singularity civilization >> effectively exists in its own universe, with mass-energy and information >> continually pouring in from the outside. Thus, limited perhaps to >> competition between like civilizations, the civilization in question can >> grow to massive proportions becoming a Kardashev scale type 3 civilization, >> controlling their galaxy by becoming the billion solar mass black hole >> galactic nucleus. Thereby secretly ruling a galaxy without ever leaving >> home. >> > > > I think the downside to this proposal is that inside a black hole's event > horizon, all paths through spacetime point radially inward. Not only would > they be able to receive information from the outside world but not send > any back, they would also only be able to receive information from people > farther out than their own radial distance, without being able to > communicate information back out to them. > > Also note that present consensus generally seems to favor the view that > information can indeed leave the event horizon, for information is not destroyed.
> > Anyone with better knowledge of information theory than I have: what is > the relationship between computronium, the Bekenstein limit, and black > holes? Are all three reached at the same point? Don't other limits to > computation cap the amount of information we can cram into computronium > before we reach the Bekenstein limit and create a black hole? > > Connor > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Tue Feb 24 03:14:20 2015 From: kanzure at gmail.com (Bryan Bishop) Date: Mon, 23 Feb 2015 21:14:20 -0600 Subject: [ExI] Black hole brains (was Re: taxonomy for fermi paradox fans) In-Reply-To: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> References: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> Message-ID: On Mon, Feb 23, 2015 at 1:46 PM, Stuart LaForge wrote: > Here is a crazy new scenario to add to the list: here you go, have fun: http://diyhpl.us/~bryan/papers2/physics/astrophysics/black-holes/Black%20holes:%20attractors%20for%20intelligence%3f.pdf http://diyhpl.us/~bryan/papers2/physics/astrophysics/black-holes/Are%20black%20hole%20starships%20possible%3f.pdf - Bryan http://heybryan.org/ 1 512 203 0507 From johnkclark at gmail.com Tue Feb 24 03:41:46 2015 From: johnkclark at gmail.com (John Clark) Date: Mon, 23 Feb 2015 22:41:46 -0500 Subject: [ExI] Black hole brains (was Re: taxonomy for fermi paradox fans) In-Reply-To: References: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> Message-ID: To give an idea of the Planck length where the Bekenstein bound becomes important, if the fingernail on your little finger were enlarged to the size of the entire observable universe then the Planck length would be as long as your fingernail
was before the expansion. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Tue Feb 24 15:10:02 2015 From: anders at aleph.se (Anders Sandberg) Date: Tue, 24 Feb 2015 16:10:02 +0100 Subject: [ExI] Black hole brains (was Re: taxonomy for fermi paradox fans) In-Reply-To: <20150223194602.Horde.SKijkCDub_G-C9loFwtHUQ6@secure88.inmotionhosting.com> Message-ID: <3155309208-30288@secure.ericade.net> Stuart LaForge , 23/2/2015 8:49 PM: Perhaps the time period between a civilization developing recursively self-improving general AI and its subsequent development of computronium is relatively short compared to geologic time scales. Computronium, being the maximally optimized medium for computation, quickly saturates the Bekenstein bound of their region of space-time by being so information dense. This causes their space-time to warp to the point of pinching itself off, forming an event horizon around them. The problem with standard spacetimes with localized event horizons is that they have curvature singularities on the inside, and all timelike paths crossing the horizon will intersect the singularity in finite proper time. In short, the computronium will not last long in its own reference frame. I think there are some rather strong theorems (Penrose?) showing this. Now, rotating black holes have fairly complicated interiors and can in theory contain closed timelike curves, which blows most standard computronium out of the water since they allow future results to adjust past inputs: you get a class of hyperturing computation (check out Scott Aaronson's paper on CTC computing http://arxiv.org/abs/0808.2669 ). The practical problem is that imploding matter likely doesn't produce the full topology needed, and hence the black hole is useless. As a Fermi explanation black holes are essentially identical to quiet M-brains.
It requires strong cultural convergence, plus that the value of communicating with an exterior spacetime always becomes less than the value of computation, plus that one can do a non-negligible amount of computation on the inside. Seth Lloyd's ultimate laptop paper analysed computing right at the Bekenstein edge; that would likely be pretty visible since it runs "hot". http://arxiv.org/abs/quant-ph/9908043 Now, if you can make wormholes or more complex topologies things get real fun... but I suspect the result is a blob of smart Planck-density stuff indistinguishable from Planck-scale noise. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Tue Feb 24 15:26:13 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Tue, 24 Feb 2015 09:26:13 -0600 Subject: [ExI] beheadings etc. Message-ID: Help me out here. Although I am a psychologist I am at a loss. Randomly killing school children, adults, worshipers, beheading Christians and other foreigners or other sects of their religion ...... Just what message is being sent here? "Hey, look at us. We behead people, kidnap children and make them carry guns......." What do they want us to think? Is it "Look at how important these people are. How brave, how strong. Yes, these are the people I want to run my country. Also, their religion sounds really wonderful if it leads to these killings" Or just what? bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From danust2012 at gmail.com Tue Feb 24 16:36:29 2015 From: danust2012 at gmail.com (Dan) Date: Tue, 24 Feb 2015 08:36:29 -0800 Subject: [ExI] beheadings etc. In-Reply-To: References: Message-ID: <64F9EB92-346C-4CA7-B01E-1459C502B2F7@gmail.com> Start by making sure the evidence here is reliable rather than just accepting it as is.
E.g., http://fair.org/blog/2015/02/20/top-10-bogus-isis-stories/ Regards, Dan See my Kindle books at: http://www.amazon.com/Dan-Ust/e/B00J6HPX8M/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Feb 24 16:51:45 2015 From: pharos at gmail.com (BillK) Date: Tue, 24 Feb 2015 16:51:45 +0000 Subject: [ExI] beheadings etc. In-Reply-To: References: Message-ID: On 24 February 2015 at 15:26, William Flynn Wallace wrote: > Help me out here. Although I am a psychologist I am at a loss. > > Randomly killing school children, adults, worshipers, beheading Christians > and other foreigners or other sects of their religion ...... > > Just what message is being sent here? > "Hey, look at us. We behead people, kidnap children and make them carry > guns......." > > What do they want us to think? Is it > "Look at how important these people are. How brave, how strong. Yes, these > are the people I want to run my country. Also, their religion sounds really > wonderful if it leads to these killings" > Or just what? > As Dan says, you have to be careful about propaganda intended to demonize enemies. The claims are probably exaggerated by all parties (for different reasons). But assuming that some atrocities have happened -------- The cause is probably fundamentalist religious maniacs. This applies (or used to apply) to groups attached to most other religions as well. There is one verse in the Quran which orders beheading in time of war. (Similar atrocity verses can be found in the Old Testament). Wikipedia also suggests - ISIL is using beheadings of locals to intimidate people, including their own soldiers, into obeying the dictates of a weak state. Beheadings of westerners are designed to strike back at the United Kingdom and the United States for military actions against ISIL that they have no other way of responding to. 
The violence might also act as a recruiting device for a certain type attracted to participate in such actions. ------------ Looking at it from the other side, when a Predator drone fires a missile the target is obliterated. But the target can include wives, children, friends, passers-by, etc. And those on the edge of the blast don't die immediately. They are terribly injured and burnt and die in agony. And they all have relatives and friends. That is why the drone campaign is described as creating more terrorists than it kills. BillK From johnkclark at gmail.com Tue Feb 24 16:53:33 2015 From: johnkclark at gmail.com (John Clark) Date: Tue, 24 Feb 2015 11:53:33 -0500 Subject: [ExI] beheadings etc. In-Reply-To: References: Message-ID: On Tue, Feb 24, 2015 William Flynn Wallace wrote: > Randomly killing school children, adults, worshipers, beheading > Christians and other foreigners or other sects of their religion ...... > Don't forget burning people alive. > Just what message is being sent here? > There is no God but Allah and Muhammad was his messenger. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Tue Feb 24 17:22:41 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Tue, 24 Feb 2015 18:22:41 +0100 Subject: [ExI] beheadings etc. In-Reply-To: References: Message-ID: The message is simple: Please continue to buy our oil! Don't frack or something! Please, please! On Tue, Feb 24, 2015 at 5:53 PM, John Clark wrote: > > > On Tue, Feb 24, 2015 William Flynn Wallace wrote: > > > Randomly killing school children, adults, worshipers, beheading >> Christians and other foreigners or other sects of their religion ...... >> > > Don't forget burning people alive. > > > Just what message is being sent here? >> > > There is no God but Allah and Muhammad was his messenger.
> > John K Clark > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnkclark at gmail.com Tue Feb 24 17:35:11 2015 From: johnkclark at gmail.com (John Clark) Date: Tue, 24 Feb 2015 12:35:11 -0500 Subject: [ExI] beheadings etc. In-Reply-To: References: Message-ID: On Tue, Feb 24, 2015 , BillK wrote: > > The cause is probably fundamentalist religious maniacs. > Probably?? Is this point really debatable? > There is one verse in the Quran which orders beheading in time of war. > There are other verses in the Quran that may help explain imbecilic Muslim behavior: "[We] shall let them live awhile, and then shall drag them to the scourge of the Fire. Evil shall be their fate" (2:126). "Believers, do not make friends with any but your own people. They will spare no pains to corrupt you. They desire nothing but your ruin. Their hatred is evident from what they utter with their mouths, but greater is the hatred which their breasts conceal" (3:118). "Believers, if you yield to the infidels they will drag you back to unbelief and you will return headlong to perdition. . . . We will put terror into the hearts of the unbelievers. . . . The Fire shall be their home" (3:149-51). "Fight against them until idolatry is no more and God's religion reigns supreme" (2:190-93). Say to the unbelievers: "You shall be overthrown and driven into Hell - an evil resting place!" (3:12).
"You will find that the most implacable of men in their enmity to the faithful are the Jews and the pagans, and that the nearest in affection to them are those who say: 'We are Christians'" (5:80-82). "Those that disbelieve and deny Our revelations shall become the inmates of Hell" (5:86). > > Similar atrocity verses can be found in the Old Testament. Sam Harris had this to say about that in his book "The End Of Faith": "Yes, the Bible contains its own sadistic lunacy but the Qur'an does not contain anything like a Sermon on the Mount. Nor is it a vast and self-contradictory book like the Old Testament, in which whole sections (like Leviticus and Deuteronomy) can be easily ignored and forgotten. The result is a unified message of triumphalism, otherworldliness, and religious hatred that has become a problem for the entire world. And the world still waits for moderate Muslims to speak honestly about it." John K Clark > > Wikipedia also suggests - > ISIL is using beheadings of locals to intimidate people, including > their own soldiers, into obeying the dictates of a weak state. > Beheadings of westerners are designed to strike back at the United > Kingdom and the United States for military actions against ISIL that > they have no other way of responding to. > > The violence might also act as a recruiting device for a certain type > attracted to participate in such actions. > ------------ > > > Looking at it from the other side, when a Predator drone fires a > missile the target is obliterated. But the target can include wives, > children, friends, passers-by, etc. And those on the edge of the blast > don't die immediately. They are terribly injured and burnt and die in > agony. And they all have relatives and friends. > That is why the drone campaign is described as creating more > terrorists than it kills.
> > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitram.ohcan at gmail.com Tue Feb 24 17:59:39 2015 From: nitram.ohcan at gmail.com (Nacho Martín) Date: Tue, 24 Feb 2015 18:59:39 +0100 Subject: [ExI] beheadings etc. In-Reply-To: References: Message-ID: I found this article by Gary Brecher very good on this topic: http://pando.com/2014/09/03/the-war-nerd-the-long-twisted-history-of-beheadings-as-propaganda/ Basically, he argues that people have been beheading enemies for propaganda reasons for a very long time. That lasted until Victorian times, when western powers became so powerful that they were able to slaughter thousands of natives without suffering casualties; from that moment the victims started to inspire pity. "In fact, it's only very recently that human cultures have learned to be coy about that fact. Before the Victorians came up with the brilliant notion of depicting conquest as a dreary but needful chore, war propaganda was an innocent, constant celebration of horrors committed by the victors, incised on the bodies of the losers." On Tue, Feb 24, 2015 at 4:26 PM, William Flynn Wallace wrote: > Help me out here. Although I am a psychologist I am at a loss. > > Randomly killing school children, adults, worshipers, beheading Christians > and other foreigners or other sects of their religion ...... > > Just what message is being sent here? > > "Hey, look at us. We behead people, kidnap children and make them carry > guns......." > > What do they want us to think? Is it > > "Look at how important these people are. How brave, how strong. Yes, > these are the people I want to run my country. Also, their religion sounds > really wonderful if it leads to these killings" > > Or just what?
> > bill w > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Feb 25 07:37:52 2015 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 24 Feb 2015 23:37:52 -0800 Subject: [ExI] The Robot Big Bang In-Reply-To: <54E89B5B.9020405@t-online.de> References: <54E89B5B.9020405@t-online.de> Message-ID: Each of the items on this list can be argued to be true today. On Sat, Feb 21, 2015 at 6:51 AM, Carsten Zander wrote: > The Robot Big Bang will have the following characteristics: > - Robots will become very cheap and will spread rapidly around the world. > Relative to how expensive they used to be, and adjusted for inflation, robots are cheaper now than they have been. > - All simple activities can be performed by robots. > "Simple" is another inexact term, but e.g. there are robots today that can play chess, and manufacture things if shown how. > - Most people will lose their jobs to robots. > Look no further than China and India, which together have over 35% of the world's population. See how many of their laborers have lost their jobs - for an expansive version of "lost", including "never had the chance to gain" - when automation could do it more cheaply. They may now be doing other jobs, even manufacturing jobs, but there are some manufacturing jobs that are lost to them. Repeat for other developing and third world countries until you get to over 50%, the dictionary definition of "most". > - All people will need a basic income. > Has this ever not been true, so long as there has been anything resembling civilization in which to have an income? Again, using a broad definition that treats even a hunter-gatherer's, let alone a farmer's, food collection as "basic income". > - Robots themselves will be produced by robots.
> There exist today many factories that employ robots to produce robots. > - Robots will transmit their skills, knowledge and abilities to other > robots. > One standard model of general-purpose industrial robot arm is set up so a human can guide it through what it needs to do, and then the arm can transmit what it has thus learned to other arms of its same model - by command of humans, but it is still the robot that performs the transmission. > - All people will be able to produce most things on their own with the > help of robots and 3-D printers (3-D printers are like robots). > Technically true today. As always there are varying levels of access to the tools: those in the poorest or (especially) most repressed parts of the world will never be allowed access to manufacture their own tools, and the reasons why are not what more and better robots would address. But for the rest of the world, there exist paths by which they can gain access to and use robots and 3D printers for manufacturing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From test at ssec.wisc.edu Wed Feb 25 07:30:26 2015 From: test at ssec.wisc.edu (Bill Hibbard) Date: Wed, 25 Feb 2015 01:30:26 -0600 (CST) Subject: [ExI] prayers for the souls of Japanese robot dogs Message-ID: http://www.japantimes.co.jp/news/2015/02/25/business/tech/afterlife-mans-best-robot-friend/ From anders at aleph.se Wed Feb 25 10:24:05 2015 From: anders at aleph.se (Anders Sandberg) Date: Wed, 25 Feb 2015 11:24:05 +0100 Subject: [ExI] beheadings etc. In-Reply-To: Message-ID: <3224798059-21597@secure.ericade.net> William Flynn Wallace , 24/2/2015 4:29 PM: Help me out here.? Although I am a psychologist I am at a loss.? Randomly killing school children, adults, worshipers, beheading Christians and other foreigners or other sects of their religion? ...... Just what message is being sent here? Why do you assume it is a *message*? Maybe the point actually is just to kill them? 
But actually, I do think you are correct: in our current era these events are largely intended as signals. But signals in themselves have no meaning, they need to be interpreted in the right context. The policeman who filmed a legal and public execution in Saudi Arabia got into trouble - there the idea is deterrence, but the explicit goriness of the act is not something officials like displayed to the world in another context. Meanwhile IS really want people to see their clips for a bundle of reasons: to terrorize enemies, to bolster their own in-group pride, to recruit, to scare away, to get attention, to obey some particular paragraph, to show that this cell is way more ruthless and cool than those other cells, just because everybody else is doing it... there are loads of reasons that I think occur at the same time. And many of the people involved have their own media theories, which may or may not make sense to anybody else, or work in reality. Basically, the IS is the Nazism of Islam. I suspect that they will in the long run have roughly the same effect on Islam as WWII had on Germany (from my hotel in Berlin I can almost make out the hole in the cityscape from the huge holocaust memorial). Basically, the militants are descending into barbarism that will tend to taint anything by association. That doesn't mean they can't do damage, although we should remember that it is not terrorism* that poses any real risk but actual warfare - terrorism is a loud, brutal signalling sideshow. [* The terrorism definition I always use is the use of violence or the threat of violence against innocents/third parties to try to further some social or political change. It is the terror part that matters, not so much what kind of violence it is. ] Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed...
URL: From foozler83 at gmail.com Wed Feb 25 16:25:39 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 25 Feb 2015 10:25:39 -0600 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: <54E34832.4020507@libero.it> Message-ID: Reply to all - Yes, I am a libertarian in the John Stuart Mill sense - my right to swing my fist stops where your nose starts. If it takes a government to stop that, then fine. I have no right to endanger others. Yes, speeding is not always dangerous - I should know. But what about the effect on other people if I crash and hurt or kill myself? No man is an island. We all have to obey laws that mostly restrict irresponsible other people (poorer reflexes, lower IQ, and a lot more) and that is the price we have to pay. What do you think is fair? That better drivers have different speed limits? (I don't know what QALY is) Living life in a bubble? No one does. If you go out in the desert and kill yourself in some crazy accident you affect other people economically, emotionally and more. Restricting others' 'rights' to drive drunk, or without seat belts, or with Alzheimer's or with a stereo at 120dB, etc. is not contrary to libertarianism. If these people affected only themselves, then fine, off yourself any way you want. But when society has to clean up your mess (support for your wife and children, towing your wreck, etc.) then it's not a question of individual rights, is it? You have no more right to hit my pocketbook and the taxes I pay than my nose. There are not many things that we do that do not affect others, and we have the responsibility to consider that. Calling it a nanny state is specious logic. bill w On Wed, Feb 18, 2015 at 6:43 PM, Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: > > > On Tue, Feb 17, 2015 at 9:56 AM, William Flynn Wallace < > foozler83 at gmail.com> wrote: > >> >> In my younger days I hated it when a car was going faster than me. So I >> sped up and passed them.
Yes, I have gotten numerous tickets but it never >> stopped me. Extremely irresponsible. >> > > ### So you say you are a poor driver. And then you want to restrict other > drivers, and take months of their lives from them at gunpoint, because > restricting speeds amounts to a significant QALY loss, in my case about 6 > months, since I drive a lot and obeying speed laws would force me to spend > an additional 6 months in my car over my life. > > Libertarian? > > Rafał > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed Feb 25 17:12:30 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 25 Feb 2015 11:12:30 -0600 Subject: [ExI] beheadings etc. In-Reply-To: <3224798059-21597@secure.ericade.net> References: <3224798059-21597@secure.ericade.net> Message-ID: > Just what message is being sent here? > > > Why do you assume it is a *message*? Maybe the point actually is just to > kill them? > > But actually, I do think you are correct: in our current era these events > are largely intended as signals. But signals in themselves have no meaning, > they need to be interpreted in the right context. The policeman who filmed > a legal and public execution in Saudi Arabia got into trouble - there the > idea is deterrence, but the explicit goriness of the act is not something > officials like displayed to the world in another context. Meanwhile IS > really want people to see their clips for a bundle of reasons: to terrorize > enemies, to bolster their own in-group pride, to recruit, to scare away, to > get attention, to obey some particular paragraph, to show that this cell is > way more ruthless and cool than those other cells, just because everybody > else is doing it...
there are loads of reasons that I think occur at the > same time. And many of the people involved have their own media theories, > which may or may not make sense to anybody else, or work in reality. > > Basically, the IS is the Nazism of Islam. I suspect that they will in the > long run have roughly the same effect on Islam as WWII had on Germany (from > my hotel in Berlin I can almost make out the hole in the cityscape from the > huge holocaust memorial). Basically, the militants are descending into > barbarism that will tend to taint anything by association. That doesn't > mean they can't do damage, although we should remember that it is not > terrorism* that poses any real risk but actual warfare - terrorism is a > loud, brutal signalling sideshow. > > [* The terrorism definition I always use is the use of violence or the > threat of violence against innocents/third parties to try to further some > social or political change. It is the terror part that matters, not so much > what kind of violence it is. ] > > > Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford > University > Lots of good reasons above. Fact is, though, that those are reasons that appeal to the hormone-fueled teenage mind - and also the mind of the religious extremist who is just mad at the world because things are not going their way. If they have no real hopes of taking over, and rationally they surely don't (irrationally of course anything goes), then they are just like teens trashing their rooms to spite their parents. I don't see the Nazi reference. The Nazis were very intelligent, patient (until later), organized, smart (until they invaded Russia - think about it; they may still be in charge over there if they had left England and Russia alone and not started the 'final solution'). Some of the best minds in the world were in the upper Nazi hierarchy. Not all by any means were psychopaths. I can't see Boko Haram in that light.
bill w > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From foozler83 at gmail.com Wed Feb 25 18:18:23 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Wed, 25 Feb 2015 12:18:23 -0600 Subject: [ExI] 'The Other Brain' Message-ID: Has anyone read this book or one similar? It appears to show that neurons may not be the most important parts of our brain, and that little money goes into education/research on glia. bill w -------------- next part -------------- An HTML attachment was scrubbed... URL: From connor_flexman at brown.edu Wed Feb 25 21:57:09 2015 From: connor_flexman at brown.edu (Flexman, Connor) Date: Wed, 25 Feb 2015 16:57:09 -0500 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: <54E34832.4020507@libero.it> Message-ID: On Wed, Feb 25, 2015 at 11:25 AM, William Flynn Wallace wrote: > Reply to all - Yes, I am a libertarian in the John Stuart Mill sense - my > right to swing my fist stops where your nose starts. If it takes a > government to stop that, then fine. I have no right to endanger others. > Yes, speeding is not always dangerous - I should know. But what about the > effect on other people if I crash and hurt or kill myself? No man is an > island. > You say you believe in "your right to swing your fist stops where another's nose starts," but then acknowledge that basically all actions have utility consequences for others and are not restricted to yourself. I agree with that principle fully, and think many ignore it far too often, but I'm pretty sure it's the libertarians who ignore it and you're basically at the opposite end here. Connor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Thu Feb 26 00:53:43 2015 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 25 Feb 2015 16:53:43 -0800 Subject: [ExI] Driverless cars for law enforcement In-Reply-To: References: <54E34832.4020507@libero.it> Message-ID: On Feb 25, 2015 8:28 AM, "William Flynn Wallace" wrote: > Restricting others' 'rights' to drive drunk, or without seat belts, or with Alzheimer's or with a stereo at 120dB, etc. is not contrary to libertarianism. If these people affected only themselves, then fine, off yourself any way you want. But when society has to clean up your mess (support for your wife children, towing your wreck, etc.) then it's not a question of individual rights, is it? You have no more right to hit my pocketbook and the taxes I pay than my nose. > > There are not many things that we do that do not affect others, and we have the responsibility to consider that. > > Calling it a nanny state is specious logic. This is true, but if you go down this line of thought much further... * If your actions lead you to be poor and desperate, you may find yourself apparently forced to take actions, such as theft, to feed yourself and your loved ones that are suboptimal for society's collective well-being. If this can easily be foreseen, then is it not in my interest to stop you from doing things, such as foregoing basic education or consuming recreational drugs in quantities that will significantly impair your long-term functionality, that will put you in that situation? * Your not working for my benefit means that I have a lower quality of life than I otherwise would. Granted, that might impair your quality of life, but what about actions that benefit you by a lesser total amount than actions that would benefit you and me? (Among the problems here is quantifying the benefit of unequal labor: some people perform better at certain tasks than others.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hkeithhenson at gmail.com Thu Feb 26 02:40:34 2015 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 25 Feb 2015 18:40:34 -0800 Subject: [ExI] beheadings etc Message-ID: On Wed, Feb 25, 2015 at 4:00 AM, Anders Sandberg wrote: >> William Flynn Wallace , 24/2/2015 4:29 PM: > >> Help me out here. Although I am a psychologist I am at a loss. > >> Randomly killing school children, adults, worshipers, beheading Christians and other foreigners or other sects of their religion ...... > >> Just what message is being sent here? > > Why do you assume it is a *message*? Maybe the point actually is just to kill them? Think about it. Why do people have wars at all? What is the purpose? We share the war trait with chimps, but humans are not in war mode with everyone else all the time like chimps are. What happened maybe half a million times since humans split with the chimps is that we ran into resource limits countless times due to population growth and then a patch of bad weather or something akin. The choice once or twice a generation was to fight the neighbors for their resources or starve. A fairly simple model shows that fighting is (on average) better for human genes. It's hard to think of a section of the world with higher population growth or poorer resource prospects than that section of the middle east. It is no wonder they are trying to kill all the other human groups. To go into war mode humans have to be infested with a meme set that dehumanizes the other group(s). IS certainly has that, but remember that the causality runs from the environmental signal to an amplified xenophobic meme. Before you think I am particularly picking on the Arabs, the pre WW II Germans were in a similar spot, and they had a similar response as did the Cambodians and the Rwandans to similar signals. snip > Basically, the IS is the Nazism of Islam. Evolved human behavior is mechanistic.
Keith From anders at aleph.se Thu Feb 26 08:45:54 2015 From: anders at aleph.se (Anders Sandberg) Date: Thu, 26 Feb 2015 09:45:54 +0100 Subject: [ExI] 'The Other Brain' In-Reply-To: Message-ID: <3305861310-12062@secure.ericade.net> William Flynn Wallace , 25/2/2015 7:20 PM: Has anyone read this book or one similar? It appears to show that neurons may not be the most important parts of our brain, and that little money goes into education/research on glia. The point is regularly made (especially by glial researchers, for some reason). In PubMed, 586,690 papers mention 'neuron' and just 89,083 mention 'glia'. But there are good reasons for this: neurons react *fast* - in the millisecond range - while glia react over the span of many seconds, and in a fairly diffuse manner. Neurons are what is responsible for ongoing and specific perception and action. Sure, there are likely important things to be discovered in the glia: we have found some are acting as stem cells, and their modulation of the chemical environment is nontrivial. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Thu Feb 26 14:50:48 2015 From: anders at aleph.se (Anders Sandberg) Date: Thu, 26 Feb 2015 15:50:48 +0100 Subject: [ExI] beheadings etc. In-Reply-To: Message-ID: <3327281444-3007@secure.ericade.net> William Flynn Wallace , 25/2/2015 6:14 PM: Lots of good reasons above. Fact is, though, that those are reasons that appeal to the hormone-fueled teenage mind - and also the mind of the religious extremist who is just mad at the world because things are not going their way. If they have no real hopes of taking over, and rationally they surely don't (irrationally of course anything goes), then they are just like teens trashing their rooms to spite their parents.
The US defence planning community got a nasty surprise when they realized that some of their post-911 adversaries were "non-Clausewitzian". They had always relied on the idea that the enemy used rational means in order to achieve their goals, but now encountered enemies who might do an act that had zero probability of "winning" in an objective sense, yet made sense as a symbolic action. This is not just angry lashing out, but performing sacred acts in order to fulfil prophecy. However, one can channel lashing out and desperation into this sacred mode. (After all, looking at the GSS in the US, it is clear that fundamentalism is more popular among the worst off, likely to provide a modicum of meaning and comfort: http://www.aleph.se/andart/archives/2007/01/criminal_because_of_god_or_godly_because_of_crime.html ) I don't see the Nazi reference. The Nazis were very intelligent, patient (until later), organized, smart (until they invaded Russia - think about it; they may still be in charge over there if they had left England and Russia alone and not started the 'final solution'). Some of the best minds in the world were in the upper Nazi hierarchy. Not all by any means were psychopaths. I can't see Boko Haram in that light. Don't read too much into the comparison, I was mostly talking about the memetic impact of the culture it is embedded in: IS is making Islam embarrassing internally. You make a mistake in ascribing Boko Haram to sociopathy: sociopaths are rare, you cannot build a large organisation from them (and they make lousy members). Most Boko Haram members are just like you and me. That is of course the real horror and lesson of BM, IS or the Nazis: most members were totally ordinary people swept into a pathological culture. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed...
URL: From foozler83 at gmail.com Thu Feb 26 16:35:00 2015 From: foozler83 at gmail.com (William Flynn Wallace) Date: Thu, 26 Feb 2015 10:35:00 -0600 Subject: [ExI] 'The Other Brain' In-Reply-To: <3305861310-12062@secure.ericade.net> References: <3305861310-12062@secure.ericade.net> Message-ID: Anders, It appears from The Other Brain that glia, in fact, control neurons, and often at distant points. I could go on. But I think that, according to the author, a sort of denial situation exists, wherein those in the field tend to put all the emphasis on neurons and actually deny the roles of glia, assigning them only support services. If the book is correct it enormously complicates understanding the brain and doing research on it because glia do not emit nice recordable electrical impulses. After reading this book, extremely clear, even to a person whose last info in this field was in physio psych in 1965, I am convinced that glia are more important and are the basis for our unconscious mind. I highly recommend it to you. I have not found anything like it. The science is detailed clearly, though of course I cannot refute it with my background. It seems about as far from making wild claims as possible. Bill W On Thu, Feb 26, 2015 at 2:45 AM, Anders Sandberg wrote: > William Flynn Wallace , 25/2/2015 7:20 PM: > > Has anyone read this book or one similar? > > It appears to show that neurons may not be the most important parts of our > brain, and that little money goes into education/research on glia. > > > The point is regularly made (especially by glial researchers, for some > reason). In PubMed, 586,690 papers mention 'neuron' and just 89,083 > mentioning 'glia'. > > But there are good reasons for this: neurons react *fast* - in the > millisecond range - while glia react over the span of many seconds, and in > a fairly diffuse manner. Neurons are what is responsible for ongoing and > specific perception and action. 
Sure, there are likely important things to > be discovered in the glia: we have found some are acting as stem cells, and > their modulation of the chemical environment is nontrivial. > > > > Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford > University > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Feb 26 18:39:28 2015 From: pharos at gmail.com (BillK) Date: Thu, 26 Feb 2015 18:39:28 +0000 Subject: [ExI] Human head transplant in two years??? Message-ID: The idea was first proposed in 2013 by Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy. He wants to use the surgery to extend the lives of people whose muscles and nerves have degenerated or whose organs are riddled with cancer. Now he claims the major hurdles, such as fusing the spinal cord and preventing the body's immune system from rejecting the head, are surmountable, and the surgery could be ready as early as 2017. ---------------------- I've just had a word with my body and it says that it feels fine, but a new head would be nice. BillK From anders at aleph.se Thu Feb 26 21:07:56 2015 From: anders at aleph.se (Anders Sandberg) Date: Thu, 26 Feb 2015 22:07:56 +0100 Subject: [ExI] 'The Other Brain' In-Reply-To: Message-ID: <3350684684-21300@secure.ericade.net> William Flynn Wallace , 26/2/2015 5:37 PM: But I think that, according to the author, a sort of denial situation exists, wherein those in the field tend to put all the emphasis on neurons and actually deny the roles of glia, assigning them only support services. If the book is correct it enormously complicates understanding the brain and doing research on it because glia do not emit nice recordable electrical impulses. 
Have you read any modern neuroscience textbook like Kandel, Schwartz, Jessell? Remember, you are basing your judgement on a somewhat partisan book. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Feb 26 21:09:27 2015 From: spike66 at att.net (spike) Date: Thu, 26 Feb 2015 13:09:27 -0800 Subject: [ExI] Human head transplant in two years??? In-Reply-To: References: Message-ID: <01fa01d05208$81e09720$85a1c560$@att.net> -----Original Message----- From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK >... Now he claims the major hurdles, such as fusing the spinal cord and preventing the body's immune system from rejecting the head, are surmountable, and the surgery could be ready as early as 2017. ---------------------- I've just had a word with my body and it says that it feels fine, but a new head would be nice. BillK _______________________________________________ They wouldn't even necessarily need to work out all the organ rejection problems. Scenario: a person with a loooootta lotta medical problems, another person with mashed frontal lobes but intact brainstem, such as motorcycle crash victims who don't wear helmets. Swap the heads, don't bother with the spinal columns, keep the good head and the good body together only long enough to do extensive painful surgeries on the body with the mashed head. Then when the healing is far enough along, swap them back, inject stem cells or whatever tech they propose to fuse the spinal cords, and no worries with rejection because body and head were only temporarily separated. spike From tara at taramayastales.com Thu Feb 26 21:12:26 2015 From: tara at taramayastales.com (Tara Maya) Date: Thu, 26 Feb 2015 13:12:26 -0800 Subject: [ExI] beheadings etc In-Reply-To: References: Message-ID: Isn't that the message? 
"If you disagree with us, we will kill you." It seems fairly straightforward. It's also a successful way to assert power, as has been repeatedly proven throughout history, including recent history. There's no mystery here. Tara Maya Blog | Twitter | Facebook | Amazon | Goodreads > On Feb 25, 2015, at 6:40 PM, Keith Henson wrote: > > On Wed, Feb 25, 2015 at 4:00 AM, Anders Sandberg wrote: > >>> William Flynn Wallace , 24/2/2015 4:29 PM: >> >>> Help me out here. Although I am a psychologist I am at a loss. >> >>> Randomly killing school children, adults, worshipers, beheading Christians and other foreigners or other sects of their religion? ...... >> >>> Just what message is being sent here? >> >> Why do you assume it is a *message*? Maybe the point actually is just to kill them? From possiblepaths2050 at gmail.com Fri Feb 27 01:14:29 2015 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 26 Feb 2015 18:14:29 -0700 Subject: [ExI] Transhumanist/sf themed tabletop games Message-ID: I was perusing the Noble Knight Games website, and came across this innovative new transhumanist themed rpg.... I would love to play this with Anders, Max and Spike, as well as many other people on this list! : ) http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147556616_A_InventoryID_E_2148134053_A_ProductLineID_E_2137431177_A_ManufacturerID_E_2145087099_A_CategoryID_E_12_A_GenreID_E_ "Posthuman Pathways is a one-shot roleplaying game that focuses on exploring what individuals are willing to sacrifice in the name of progress. It explores the themes of transhumanism, seen through the eyes of the player characters during their lifetimes. During play, you explore how the world is transforming and what impacts that has on the people within it. You can play this game multiple times to explore different potential futures. This game is both diceless and GM-less, designed for three people to play over a single 3-4 hour session." 
I think of Eclipse Phase as one of the two best truly transhumanist rpg's out there... Though a friend thinks it is "too much fantasy, and not hardcore sf to the extent of Transhuman Space..." "Eclipse Phase is a post-apocalyptic game of conspiracy and horror. Humanity is enhanced and improved, but also battered and bitterly divided. Technology allows the re-shaping of bodies and minds, but it also creates opportunities for oppression and puts the capabilities for mass destruction in the hands of everyone. And other threats lurk in the devastated habitats of the Fall, dangers both familiar and alien. In this harsh setting, the players participate in a cross-faction conspiracy that seeks to protect transhumanity from threats both internal and external. Along the way, they may find themselves hunting for prized technology in a gutted habitat falling from orbit, risking the hellish landscapes of a ruined Earth, or following the trail of a terrorist through militarized stations and isolationist habitats. Or they may find themselves stepping through a Pandora Gate, a wormhole to distant stars and the alien secrets beyond." http://www.nobleknight.com/ViewProducts.asp_Q_ProductLineID_E_2137424581_A_ManufacturerID_E_2145085760_A_CategoryID_E_12_A_GenreID_E_ The first Lawnmower Man film is one of my all-time favorite movies! I think it is superior to Transcendence and Lucy... I had no idea an rpg had been based on it... http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_-1672160027_A_InventoryID_E_2147661866_A_ProductLineID_E_2137419113_A_ManufacturerID_E_89_A_CategoryID_E_12_A_GenreID_E_ Mindjammer is an updated take on space opera, and won an ENnie Award! "Grab your blaster, thoughtcast your orders to the starship sentience, and fire up the planning engines - come and defend the light of humanity's greatest civilization as it spreads to the stars! 
The ENnie Award-winning transhuman science-fiction RPG setting returns, in a new edition updated and massively expanded for the Fate Core rules. Mindjammer is an action-packed tabletop roleplaying game about heroic adventurers in the galaxy of the far future, filled with virtual realities, sentient starships, realistic aliens, and mysterious worlds. Using the popular and award-winning Fate Core rules, Mindjammer lets you play hardened mercs, cunning traders, steely-nerved pilots, intrigue-filled spies and culture agents, aliens, divergent hominids, artificial life forms, and even sentient starships. It's a standalone game with everything you need to play, including innovative new rules for alien life, planets, and star systems, organizations, culture conflict, hypertech, starmaps, background material, virtual realities, techno-psionic powers, and much more. It's the Second Age of Space - the transhuman adventure is just beginning! Made in the USA." http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147554972_A_InventoryID_E_2148200841_A_ProductLineID_E_2137431586_A_ManufacturerID_E_2145087094_A_CategoryID_E_12_A_GenreID_E_ My favorite transhumanist themed rpg, Transhuman Space... : ) I've read that David L. Pulver, its creator, has been having financial struggles... I suppose Steve Jackson is the exception to the rule about game designers living lives of poverty. "It's the year 2100. Humans have colonized the solar system. China and America struggle for control of Mars. The Royal Navy patrols the asteroid belt. Nanotechnology has transformed life on Earth forever, and gene-enhanced humans share the world with artificial intelligences and robotic cybershells. Our solar system has become a setting as exciting and alien as any interstellar empire. Pirate spaceships hijacking black holes . . . sentient computers and artificial "bioroids" demanding human rights . . . nanotechnology and mind control . . . 
Transhuman Space is cutting-edge science fiction adventure that begins where cyberpunk ends. This new Powered by GURPS line was created by David L. Pulver and illustrated by Christopher Shy. The core book, Transhuman Space, opens with close to a hundred pages of world and background material. The hardback edition includes a customized GURPS Lite - no other books are required to use it, although the GURPS Basic Set and Compendium I are recommended for GMs." http://www.nobleknight.com/productdetailsearch.asp_Q_ProductID_E_16512_A_InventoryID_E_0 I am fascinated by post-apocalyptic settings, and this is one which reminds me of Poul Anderson's classic sf novel, Brain Wave.... "Mankind has changed. Between the Mutts, Roms, Rippers, and Rejects, humanity is but a shadow of its former glory, especially now that the animals have developed sentience. Prepare yourself for the new face of the apocalypse. This expanded game compiles the EarthAD.2 core book, companion Enhancement Pack, and submarine pack" Another post-apocalyptic rpg, with this one involving deadly asteroid strikes... It has overtones of David Gerrold's "War Against the Chtorr" series... "Degenesis is the story of mankind's struggle in the wake of Earth's greatest catastrophe: a rain of massive asteroids. Europe and Africa have been cut off from the other continents and battle against each other for control of the known world. In Europe, the people are finally emerging from a dark age that spanned half a millennium, whereas Africa has become complacent and corrupt after centuries of wealth and splendor. Meanwhile, a new threat to mankind has emerged. With the asteroids came a new and sinister life form which poisoned the earth and its creatures." 
http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147477659_A_InventoryID_E_0_A_ProductLineID_E_2137426308_A_ManufacturerID_E_2145085760_A_CategoryID_E_12_A_GenreID_E_ This science fantasy rpg came out in 1981, during the Star Wars craze, and was truly crazy fun and over the top, with game mechanics that were ahead of its time... I have lost my copy of the game and for the last three years have been on various waiting lists, in an attempt to get a copy. If I had the money to burn, I would hire a creative team to write and publish the expansion modules that were never made, but discussed in the basic set manual. Of the two rpg creators, one has died, but the other is still living. "THE FIRST STEP in your journey to the outer limits of your imagination! This introductory module fuses the worlds of science and fantasy to form a space/fantasy role playing game which is easily learned by beginners, and gives the experienced gamer future worlds rich in technology, equipment, and danger. STAR ROVERS is a game system of interlocking modules which allow you to create whole universes to explore, and supplies you with everything necessary to generate realistic and well-rounded player characters. It is also an ideal sourcebook for ANY space role playing adventures. Complete rules allow you to quickly create deadly firefights, narrow escapes, and bizarre encounters in all conceivable times and places among the stars. MODULE ONE CONTAINS over 130 pages of fully illustrated, easily-referred-to rules, Starship Floorplans, Quick Reference Sheets, 5 Dice, a Mapboard of Moondog Maude's Cantina, Encounter Charts, a Time Line Chart of all ages and technologies, and everything you need to create whole star systems and explore them. Journey with STAR ROVERS into the future and learn the incredible secrets that await you!" 
http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147360622_A_InventoryID_E_0_A_ProductLineID_E_2137420019_A_ManufacturerID_E_1952643922_A_CategoryID_E_16_A_GenreID_E_ This could perhaps be viewed as not a transhuman, but posthuman, rpg... A particular quote by Arthur C. Clarke comes to mind! "Solipsist is a diceless role-playing game for 2 or more players. Stylishly presented with a luscious green cover, it features the artwork of author David Donachie as well as Gregor Hutton (Best Friends), Kurt Taylor and Frieda Van Raevels. Solipsists are people who think so strongly and individually, that they can literally change reality, teasing out the fabric of the consensus and changing it. In this game you and your friends play a group of balanced Solipsists, struggling to fulfil your grandiose dreams, retain your desperate grip on reality, and fight the un-making of the Shadows before they can end the world for good." *"Imagine that someone offered you a door to an alternative world in which everything you ever dreamed of was fact. Would you go?"* http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147382681_A_InventoryID_E_0_A_ProductLineID_E_2137423016_A_ManufacturerID_E_2145084493_A_CategoryID_E_12_A_GenreID_E_ Now for the ultimate horror scenario rpg, life as a high school student! *"Alma Mater* is a role-playing game in which players may choose to be a Jock, Cheerleader, Tough, Brain, Criminal, Average, or Loser. Their challenge is to successfully live as a teenager through four years of modern-day American high school. The game rules cover nearly every aspect of teenage life, including sports, social situations, fights, and hot rods. In *Alma Mater*, you can act out your wildest fantasies and do anything that you would normally consider exceptionally foolish, suicidal, or downright crazy." 
/ProductDetail.asp_Q_ProductID_E_1978502872_A_InventoryID_E_2148174735 _A_ProductLineID_E_362270611_A_ManufacturerID_E_-1109488870_A_CategoryID_E_12_A_GenreID_E_ Sexy alien rock musicians come to Earth to help us chill out and learn to really love life! If rpg's had been around during the sixties, and embraced by hippies, this would have probably been very popular... "In 2073, the alien Starchildren arrived on Earth, following lifetimes of rocking among the stars. Expecting to find a musical utopia here on Rock's home planet, they instead land upon a quiet, fearful world, where music has become a controlled substance and free speech is a thing of the past. Undaunted, the Starchildren secretly walk among the Resistance, sowing the seeds of the rebellion to come! Starchildren: Velvet Generation provides a breakneck Rock & Roll setting and a fast, innovative card-based system. With full rules for live performances, strange alien abilities, and the occasional ballroom blitz, this lavishly illustrated hardcover rulebook contains everything players need to save the world - one Rock concert at a time!" http://www.nobleknight.com/ViewProducts.asp_Q_ProductLineID_E_893438896_A_ManufacturerID_E_-525001013_A_CategoryID_E_12_A_GenreID_E_ A fun non-fiction work about the gamer subculture, and who inhabits it.... "The funniest gaming book of the year! When Jonny Nexus (The Jonny Nexus Experience and Game Night) and James Desborough (The Munchkin's Guide to Power Gaming) come together to cast an irreverent eye on gamers, expect weirdness and much hilarity! "Sex, Dice and Gamer Chicks" casts an introspective and irreverent eye onto gamers themselves. Just what drives a rules lawyer? What are the secrets to fame, success and riches as an all-star games designer? Are female gamers weird? These questions and many others are ignored as Sex, Dice and Gamer Chicks tweaks and teases apart the very fabric of gaming and those who call themselves gamers. 
Written by James Desborough and Jonny Nexus, both World-class Gaming Personalities themselves, Sex, Dice and Gamer Chicks will have you in stitches from beginning to end! Essential reading for gamers, gamer spouses, gamer family members and gamer widows, provided they have a sense of humor." http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147432494_A_InventoryID_E_0_A_ProductLineID_E_2137424277_A_ManufacturerID_E_617212936_A_CategoryID_E_12_A_GenreID_E_ This is actually a fantasy boardgame, based on my beloved John Carter of Mars novel series by Edgar Rice Burroughs. I remember lovingly looking it over when I was around twelve, and eventually I plan on buying it. It can be played solitaire! "This game simulates the legendary world of Edgar Rice Burroughs, Barsoom. Players are involved in Barsoomian epics in the role of one hero or one of his companions, and strive to perform as nobly as those heroes of the past. There are evil villains to be met, Martian wildlife to be tamed, great treasures unique to Barsoom to be gained and lovely lasses to be wooed and won. Players do not strive for victory points, but for greater glory, high romance and their attendant acclaim." http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_15273_A_InventoryID_E_2148092625_A_ProductLineID_E_2137430485_A_ManufacturerID_E_37_A_CategoryID_E_12_A_GenreID_E_ This is the mother of all super-detailed science fiction galactic empire wargames! A game can sometimes take up to around 12 hours... "Twilight Imperium continues to be the board game that all other science fiction games are measured against. A big box Fantasy Flight game, Twilight Imperium can last a staggering 11 hours, but players will be enthralled throughout the game. You'll take control of an alien race with their own special abilities, technologies and ships and must trade, negotiate, develop and battle your way to galactic dominance." http://en.wikipedia.org/wiki/Twilight_Imperium A major award winner... 
"Eclipse is the latest hot 4x (explore, expand, exploit, and exterminate) science fiction board game. It has a streamlined, almost Euro-game-style approach to the 4x genre that focuses on the explore, expand and exterminate aspects of the genre. Great designs and graphics keep Eclipse streamlined and fast, keeping everyone involved through the entire game." http://en.wikipedia.org/wiki/Eclipse_%28board_game%29 A classic that has stayed popular for decades.... "Cosmic Encounter has been reprinted multiple times since its initial release, with various publishers editing the game slightly. Through the years, the highly popular core mechanics of racial abilities and a simple combat system have stayed the same. Cosmic Encounter is a fast, highly interactive game that provides players with few, but significant, decisions as it plays out." http://en.wikipedia.org/wiki/Cosmic_Encounter I hope I am not the only one who has enjoyed this walk down memory lane. I look forward to any replies about these games, and those not on my list... One day we should have an Extropy List Gaming Con! Lol.... John : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Fri Feb 27 01:33:06 2015 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 26 Feb 2015 18:33:06 -0700 Subject: [ExI] beheadings etc In-Reply-To: References: Message-ID: This discussion reminds me of the Dune series by Frank Herbert, and how his message was to beware of religious fanaticism and messiahs. I remember how even Paul Atreides knew that by unleashing his priests and other fanatical warriors on the galaxy, there would be untold massacres and war crimes. I wish Frank Herbert were still around (he has been gone for around 30 years) to tell us what he thinks of ISIS, and the overall situation in the Middle East. John On Thu, Feb 26, 2015 at 2:12 PM, Tara Maya wrote: > Isn't that the message? "If you disagree with us, we will kill you." 
It > seems fairly straightforward. It's also a successful way to assert power, > as has been repeatedly proven throughout history, including recent history. > There's no mystery here. > > Tara Maya > Blog | Twitter | Facebook | Amazon | Goodreads > > > > > On Feb 25, 2015, at 6:40 PM, Keith Henson > wrote: > > > > On Wed, Feb 25, 2015 at 4:00 AM, Anders Sandberg > wrote: > > > >>> William Flynn Wallace , 24/2/2015 4:29 PM: > >> > >>> Help me out here. Although I am a psychologist I am at a loss. > >> > >>> Randomly killing school children, adults, worshipers, beheading > Christians and other foreigners or other sects of their religion? ...... > >> > >>> Just what message is being sent here? > >> > >> Why do you assume it is a *message*? Maybe the point actually is just > to kill them? > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Fri Feb 27 03:04:45 2015 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 26 Feb 2015 22:04:45 -0500 Subject: [ExI] Color In-Reply-To: References: Message-ID: Recent discussion of effing red made me think to share this here. http://www.buzzfeed.com/catesish/help-am-i-going-insane-its-definitely-blue http://www.buzzfeed.com/claudiakoerner/this-might-explain-why-that-dress-looks-blue-and-black-and-w What's going on with this? I assumed it was an elaborate hoax until my wife said it reversed too. Parallel worlds like some Twilight Zone episode could be an acceptable answer even if highly improbable. :) -------------- next part -------------- An HTML attachment was scrubbed... 
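The second BuzzFeed link gestures at the standard color-constancy explanation: the photo is ambiguous about the lighting, so viewers who silently assume different illuminants "discount" them differently from the very same pixel values. A minimal sketch of that idea follows; the pixel and illuminant numbers are invented for illustration, not sampled from the actual photo.

```python
# Hypothetical illustration of the color-constancy account of the dress
# photo: identical pixel values are perceived differently depending on
# which illuminant the viewer's visual system assumes and divides out.
# All RGB values below are made up for the demo.

def discount_illuminant(pixel, illuminant):
    """Von Kries-style correction: divide each RGB channel by the
    assumed illuminant color, then rescale back to the 0-255 range."""
    return tuple(min(255, round(255 * p / i))
                 for p, i in zip(pixel, illuminant))

# An ambiguous bluish-grey pixel.
pixel = (120, 130, 160)

# Viewer A assumes cool, bluish daylight: the blue cast is blamed on the
# light, so the corrected surface color comes out near-neutral (whitish).
print(discount_illuminant(pixel, (180, 200, 255)))

# Viewer B assumes warm, yellowish indoor light: the blue must then
# belong to the surface itself, which comes out distinctly blue.
print(discount_illuminant(pixel, (255, 230, 180)))
```

Same input pixel, two different "perceived" surface colors; which one you get depends entirely on the assumed illuminant, which is exactly the ambiguity the photo happens to sit on.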
URL: From brent.allsop at canonizer.com Fri Feb 27 03:32:25 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Thu, 26 Feb 2015 20:32:25 -0700 Subject: [ExI] Color In-Reply-To: References: Message-ID: <54EFE549.6050701@canonizer.com> In my opinion, this kind of stuff just confuses people, and leads them away from what is important. What is important is a simple elemental redness, and a simple elemental greyness, and the qualitative difference between them, and the fact that zombie physics has no account for that. What is that difference? When is someone else experiencing the same quality, or an inverted one? And what are the neural correlates of that obvious difference, i.e. how do you detect one vs. the other? Brent On 2/26/2015 8:04 PM, Mike Dougherty wrote: > > Recent discussion of effing red made me think to share this here. > > http://www.buzzfeed.com/catesish/help-am-i-going-insane-its-definitely-blue > > http://www.buzzfeed.com/claudiakoerner/this-might-explain-why-that-dress-looks-blue-and-black-and-w > > What's going on with this? I assumed it is an elaborate hoax until my > wife said it reversed too. Parallel worlds like some Twilight Zone > episode could be an acceptable answer even if highly improbable. :) > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From veronese at uab.edu Fri Feb 27 03:34:10 2015 From: veronese at uab.edu (Keith Veronese) Date: Thu, 26 Feb 2015 21:34:10 -0600 Subject: [ExI] Transhumanist/sf themed tabletop games In-Reply-To: References: Message-ID: I'm a tabletop RPG nerd - definitely picking up Posthuman Pathways. Keith Veronese, Ph.D. 
veronesepk at gmail.com 205.907.3602 keithveronese.com On Thu, Feb 26, 2015 at 7:14 PM, John Grigg wrote: > I was perusing the Noble Knight Games website, and came across this > innovative new transhumanist themed rpg.... I would love to play this with > Anders, Max and Spike, as well as many other people on this list! : ) > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147556616_A_InventoryID_E_2148134053_A_ProductLineID_E_2137431177_A_ManufacturerID_E_2145087099_A_CategoryID_E_12_A_GenreID_E_ > > > > "Posthuman Pathways is a one-shot roleplaying game that focuses on exploring > what individuals are willing to sacrifice in the name of progress. It > explores the themes of transhumanism, seen through the eyes of the player > characters during their lifetimes. During play, you explore how the world > is transforming and what impacts that has on the people within it. You can > play this game multiple times to explore different potential futures. This > game is both diceless and GM-less, designed for three people to play over a > single 3-4 hour session." > > > > I think of Eclipse Phase as one of the two best truly transhumanist rpg's > out there... Though a friend thinks it is "too much fantasy, and not > hardcore sf to the extent of Transhuman Space..." > > > "Eclipse Phase is a post-apocalyptic game of conspiracy and horror. > Humanity is enhanced and improved, but also battered and bitterly divided. > Technology allows the re-shaping of bodies and minds, but it also creates > opportunities for oppression and puts the capabilities for mass destruction > in the hands of everyone. And other threats lurk in the devastated habitats > of the Fall, dangers both familiar and alien. > > In this harsh setting, the players participate in a cross-faction > conspiracy that seeks to protect transhumanity from threats both internal > and external. 
Along the way, they may find themselves hunting for prized > technology in a gutted habitat falling from orbit, risking the hellish > landscapes of a ruined Earth, or following the trail of a terrorist through > militarized stations and isolationist habitats. Or they may find themselves > stepping through a Pandora Gate, a wormhole to distant stars and the alien > secrets beyond." > > > > > > > http://www.nobleknight.com/ViewProducts.asp_Q_ProductLineID_E_2137424581_A_ManufacturerID_E_2145085760_A_CategoryID_E_12_A_GenreID_E_ > > The first Lawnmower Man film is one of my all-time favorite movies! I > think it is superior to Transcendence and Lucy... I had no idea an rpg had > been based on it... > > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_-1672160027_A_InventoryID_E_2147661866_A_ProductLineID_E_2137419113_A_ManufacturerID_E_89_A_CategoryID_E_12_A_GenreID_E_ > > Mindjammer is an updated take on space opera, and won an ENnie > Award! > > "Grab your blaster, thoughtcast your orders to the starship sentience, and > fire up the planning engines - come and defend the light of humanity's > greatest civilization as it spreads to the stars! The ENnie Award-winning > transhuman science-fiction RPG setting returns, in a new edition updated > and massively expanded for the Fate Core rules. Mindjammer is an > action-packed tabletop roleplaying game about heroic adventurers in the > galaxy of the far future, filled with virtual realities, sentient > starships, realistic aliens, and mysterious worlds. Using the popular and > award-winning Fate Core rules, Mindjammer lets you play hardened mercs, > cunning traders, steely-nerved pilots, intrigue-filled spies and culture > agents, aliens, divergent hominids, artificial life forms, and even > sentient starships. 
It's a standalone game with everything you need to > play, including innovative new rules for alien life, planets, and star > systems, organizations, culture conflict, hypertech, starmaps, background > material, virtual realities, techno-psionic powers, and much more. It's the > Second Age of Space - the transhuman adventure is just beginning! Made in > the USA." > > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147554972_A_InventoryID_E_2148200841_A_ProductLineID_E_2137431586_A_ManufacturerID_E_2145087094_A_CategoryID_E_12_A_GenreID_E_ > > My favorite transhumanist themed rpg, Transhuman Space... : ) I've read > that David L. Pulver, its creator, has been having financial struggles... > I suppose Steve Jackson is the exception to the rule about game designers > living lives of poverty. > > > "It's the year 2100. Humans have colonized the solar system. China and > America struggle for control of Mars. The Royal Navy patrols the asteroid > belt. Nanotechnology has transformed life on Earth forever, and > gene-enhanced humans share the world with artificial intelligences and > robotic cybershells. Our solar system has become a setting as exciting and > alien as any interstellar empire. Pirate spaceships hijacking black holes . > . . sentient computers and artificial "bioroids" demanding human rights . . > . nanotechnology and mind control . . . Transhuman Space is cutting-edge > science fiction adventure that begins where cyberpunk ends. > > This new Powered by GURPS line was created by David L. Pulver and > illustrated by Christopher Shy. The core book, Transhuman Space, opens with > close to a hundred pages of world and background material. The hardback > edition includes a customized GURPS Lite - no other books are required to > use it, although the GURPS Basic Set and Compendium I are recommended for > GMs." 
> > > http://www.nobleknight.com/productdetailsearch.asp_Q_ProductID_E_16512_A_InventoryID_E_0 > > I am fascinated by post-apocalyptic settings, and this is one which > reminds me of Poul Anderson's classic sf novel, Brain Wave.... > > > "Mankind has changed. Between the Mutts, Roms, Rippers, and Rejects, > humanity is but a shadow of its former glory, especially now that the > animals have developed sentience. Prepare yourself for the new face of the > apocalypse. > > This expanded game compiles the EarthAD.2 core book, companion Enhancement > Pack, and submarine pack" > > > > Another post-apocalyptic rpg, with this one involving deadly asteroid > strikes... It has overtones of David Gerrold's "War Against the Chtorr" > series... > > > "Degenesis is the story of mankind's struggle in the wake of Earth's > greatest catastrophe: a rain of massive asteroids. Europe and Africa have > been cut off from the other continents and battle against each other for > control of the known world. In Europe, the people are finally emerging from > a dark age that spanned half a millennium, whereas Africa has become > complacent and corrupt after centuries of wealth and splendor. Meanwhile, a > new threat to mankind has emerged. With the asteroids came a new and > sinister life form which poisoned the earth and its creatures." > > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147477659_A_InventoryID_E_0_A_ProductLineID_E_2137426308_A_ManufacturerID_E_2145085760_A_CategoryID_E_12_A_GenreID_E_ > > This science fantasy rpg came out in 1981, during the Star Wars craze, and > was truly crazy fun and over the top, with game mechanics that were ahead > of its time... I have lost my copy of the game and for the last three > years have been on various waiting lists, in an attempt to get a copy. If > I had the money to burn, I would hire a creative team to write and publish > the expansion modules that were never made, but discussed in the basic set > manual. 
Of the two rpg creators, one has died, but the other is still > living. > > > "THE FIRST STEP in your journey to the outer limits of your imagination! > This introductory module fuses the worlds of science and fantasy to form a > space/fantasy role playing game which is easily learned by beginners, and > gives the experienced gamer future worlds rich in technology, equipment, and > danger. > > STAR ROVERS is a game system of interlocking modules which allow you to > create whole universes to explore, and supplies you with everything > necessary to generate realistic and well-rounded player characters. It is > also an ideal sourcebook for ANY space role playing adventures. Complete > rules allow you to quickly create deadly firefights, narrow escapes, and > bizarre encounters in all conceivable times and places among the stars. > > MODULE ONE CONTAINS over 130 pages of fully illustrated, > easily-referred-to rules, Starship Floorplans, Quick Reference Sheets, 5 > Dice, a Mapboard of Moondog Maude's Cantina, Encounter Charts, a Time Line > Chart of all ages and technologies, and everything you need to create whole > star systems and explore them. Journey with STAR ROVERS into the future and > learn the incredible secrets that await you!" > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147360622_A_InventoryID_E_0_A_ProductLineID_E_2137420019_A_ManufacturerID_E_1952643922_A_CategoryID_E_16_A_GenreID_E_ > > This could perhaps be viewed as not a transhuman, but posthuman, rpg... A > particular quote by Arthur C. Clarke comes to mind! > > > "Solipsist is a diceless role-playing game for 2 or more players. > Stylishly presented with a luscious green cover it features the artwork of > author David Donachie as well as Gregor Hutton (Best Friends), Kurt Taylor > and Frieda Van Raevels. > > Solipsists are people who think so strongly and individually, that they > can literally change reality, teasing out the fabric of the consensus and > changing it. 
> > In this game you and your friends play a group of balanced Solipsists, > struggling to fulfil your grandiose dreams, retain your desperate grip on > reality, and fight the un-making of the Shadows before they can end the > world for good." > > > *"Imagine that someone offered you a door to an alternative world in which > everything you ever dreamed of was fact. Would you go?"* > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147382681_A_InventoryID_E_0_A_ProductLineID_E_2137423016_A_ManufacturerID_E_2145084493_A_CategoryID_E_12_A_GenreID_E_ > > Now for the ultimate horror scenario rpg, life as a high school student! > > *"Alma Mater* is a role-playing game in which players may choose to be a > Jock, Cheerleader, Tough, Brain, Criminal, Average, or Loser. Their > challenge is to successfully live as a teenager through four years of > modern-day American high school. The game rules cover nearly every aspect > of teenage life, including sports, social situations, fights, and hot rods. > > In *Alma Mater*, you can act out your wildest fantasies and do anything > that you would normally consider exceptionally foolish, suicidal, or > downright crazy." > > /ProductDetail.asp_Q_ProductID_E_1978502872_A_InventoryID_E_2148174735 > _A_ProductLineID_E_362270611_A_ManufacturerID_E_-1109488870_A_CategoryID_E_12_A_GenreID_E_ > > Sexy alien rock musicians come to Earth to help us chill out and learn to > really love life! If rpg's had been around during the sixties, and > embraced by hippies, this would have probably been very popular... > > > > "In 2073, the alien Starchildren arrived on Earth, following lifetimes > of rocking among the stars. Expecting to find a musical utopia here on > Rock's home planet, they instead land upon a quiet, fearful world, where > music has become a controlled substance and free speech is a thing of the > past. Undaunted, the Starchildren secretly walk among the Resistance, > sowing the seeds of the rebellion to come! 
Starchildren: Velvet Generation > provides a breakneck Rock & Roll setting and a fast, innovative card-based > system. With full rules for live performances, strange alien abilities, and > the occasional ballroom blitz, this lavishly illustrated hardcover rulebook > contains everything players need to save the world - one Rock concert at a > time!" > > > > > http://www.nobleknight.com/ViewProducts.asp_Q_ProductLineID_E_893438896_A_ManufacturerID_E_-525001013_A_CategoryID_E_12_A_GenreID_E_ > > A fun non-fiction work about the gamer subculture, and who inhabits it.... > > > "The funniest gaming book of the year! > > When Jonny Nexus (The Jonny Nexus Experience and Game Night) and James > Desborough (The Munchkin's Guide to Power Gaming) come together to cast an > irreverent eye on gamers, expect weirdness and much hilarity! > > "Sex, Dice and Gamer Chicks" casts an introspective and irreverent eye > onto gamers themselves. Just what drives a rules lawyer? What are the > secrets to fame, success and riches as an all-star games designer? Are > female gamers weird? > > These questions and many others are ignored as Sex, Dice and Gamer Chicks > tweaks and teases apart the very fabric of gaming and those who call > themselves gamers. Written by James Desborough and Jonny Nexus, both > World-class Gaming Personalities themselves, Sex, Dice and Gamer Chicks > will have you in stitches from beginning to end! > > Essential reading for gamers, gamer spouses, gamer family members and > gamer widows provided they have a sense of humor." > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_2147432494_A_InventoryID_E_0_A_ProductLineID_E_2137424277_A_ManufacturerID_E_617212936_A_CategoryID_E_12_A_GenreID_E_ > > This is actually a fantasy boardgame, based on my beloved John Carter of > Mars novel series, by Edgar Rice Burroughs. I remember lovingly looking it > over when I was around twelve, and eventually I plan on buying it. It can > be played solitaire! 
> > "This game simulates the legendary world of Edgar Rice Burroughs, Barsoom. > Players are involved in Barsoomian epics in the role of one hero or one of > his companions, and strive to perform as nobly as those heroes of the past. > There are evil villains to be met, Martian wildlife to be tamed, great > treasures unique to Barsoom to be gained and lovely lasses to be wooed and > won. > > Players do not strive for victory points, but for greater glory, high > romance and their attendant acclaim." > > > http://www.nobleknight.com/ProductDetail.asp_Q_ProductID_E_15273_A_InventoryID_E_2148092625_A_ProductLineID_E_2137430485_A_ManufacturerID_E_37_A_CategoryID_E_12_A_GenreID_E_ > > > > > This is the mother of all super-detailed science fiction galactic empire > wargames! A game can sometimes take up to 12 hours... > > "Twilight Imperium continues to be the board game that all other science > fiction games are measured against. A big box Fantasy Flight game, Twilight > Imperium can last a staggering 11 hours, but players will be enthralled > throughout the game. You'll take control of an alien race with their own > special abilities, technologies and ships and must trade, negotiate, > develop and battle your way to galactic dominance." > > > http://en.wikipedia.org/wiki/Twilight_Imperium > > A major award winner... > > "Eclipse is the latest hot 4x (explore, expand, exploit, and exterminate) > science fiction board game. It has a streamlined, almost Euro-game-style > approach to the 4x genre that focuses on the explore, expand and > exterminate aspects of the genre. Great designs and graphics keep Eclipse > streamlined and fast, keeping everyone involved through the entire game." > > http://en.wikipedia.org/wiki/Eclipse_%28board_game%29 > > A classic that has stayed popular for decades.... > > > "Cosmic Encounter has been reprinted multiple times since its initial > release, with various publishers editing the game slightly. 
Through the > years, the highly popular core mechanics of racial abilities and a simple > combat system have stayed the same. Cosmic Encounter is a fast, highly > interactive game that provides players with few, but significant, decisions > as it plays out." > > http://en.wikipedia.org/wiki/Cosmic_Encounter > > I hope I am not the only one who has enjoyed this walk down memory lane. > I look forward to any replies about these games, and those not on my > list... > > > One day we should have an Extropy List Gaming Con! Lol.... > > > John : ) > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.dugger at gmail.com Fri Feb 27 04:06:17 2015 From: jay.dugger at gmail.com (Jay Dugger) Date: Thu, 26 Feb 2015 22:06:17 -0600 Subject: [ExI] Transhumanist/sf themed tabletop games Message-ID: 2156 Thursday, 26 February 2015 Hello all: Anders might not mention this out of modesty, so I will do it for him. 1) Search the archives for some old (~1999) discussions of "Big Ideas, Grand Vision," an unpublished RPG by Dr. Sandberg 2) Go see Dr. Sandberg's pages on role-playing games at http://www.aleph.se/Nada/game.html. InfoWar I like best, and it has aged well. 3) Search the web for his work on 2300AD, Eclipse Phase, and his published Cities on the Edge supplement for Transhuman Space. Finally, go back to Noble Knight Games and look up the board games "Attack Vector: Tactical" and "High Frontier." The latter has some transhumanist elements, and both belong very much to the hard SF school of play. Once done, research the various setting books for EABA from Greg Porter. P.S.: There was also a War Against the Chtorr GURPS book. P.P.S.: That convention seems like a good idea! 
-- Jay Dugger (314) 766-4426 -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Feb 27 05:22:49 2015 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 26 Feb 2015 21:22:49 -0800 Subject: [ExI] Transhumanist/sf themed tabletop games In-Reply-To: References: Message-ID: I have a friend who raves about Eclipse Phase, but I've read through it and a lot of its concepts seem almost unplayable, as a consequence of its setting. Transhuman Space is more playable, though I haven't often found a group actually willing to play GURPS. Then again, most of the games I've been in recently tended to be more narrative/less crunchy. I'm running a game based around several independent Singularities happening at once (a common mildly tech-suppressive government - e.g., giving public support for those sciences that advanced humanity in little steps without empowering the masses too much - suddenly lost access to and control of the area) and running into one another, with the PCs charged with cleaning up the resulting chaos. You may know what to do when someone asks if you are a god, but what about when what's supposed to be the sworn enemy of your race tells you that you are a god and sets about proving it? -------------- next part -------------- An HTML attachment was scrubbed... URL: From clausb at gmail.com Fri Feb 27 07:06:10 2015 From: clausb at gmail.com (Claus Bornich) Date: Fri, 27 Feb 2015 08:06:10 +0100 Subject: [ExI] Transhumanist/sf themed tabletop games Message-ID: A fine list of transhumanistically themed RPGs from John Grigg, and I'd certainly be up for a game jam sometime. I'm familiar with most of the games on your list with Eclipse Phase, Mindjammer and Transhuman Space at the top of my list, but let me add a few more. 
Nova Praxis (rpg.drivethrustuff.com/product/112483/Nova-Praxis-Augmented-PDF): Very interesting hard-sf transhumanistically themed RPG designed for the very cool FATE system by a guy who really understands how to use that system in Sci-Fi games. Orions Arm (http://www.orionsarm.com/): More of a collection of ideas and setting than game: "We embrace speculative ideas like Drexlerian assemblers, mind uploading, posthuman intelligence, magnetic monopoles, wormholes and the technologies, ..." Freemarket / Porject Donut is a very interesting game about post-scarcity transhuman future Utopia. This game is pretty much unobtainable unfortunately as it looks really innovative and interesting and is written by Luke Crane and Jared Sorensen which are big names in indie RPGs and so you can expect quality stuff. Diaspora (rpg.drivethrustuff.com/product/79933/Diaspora): More of a space opera than transhumanistic, but it lets the players create their universe together with some clever use of the FATE system and so a great game for playing out the future of humanity. The Void Core: SF, near future wit a bit of survival horror - this one is pretty far down my list, but as it is "pay what you want - even free" I can't complain: rpg.drivethrustuff.com/product/117563/The-Void-Core-PDF Posthuman Pathways is a bit too minimalistic and freeform for me, although with the right crowd I'm sure it could be interesting and you can get a "pay what you want - even free" PDF here: rpg.drivethrustuff.com/product/127547/Posthuman-Pathways -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Feb 27 08:31:28 2015 From: pharos at gmail.com (BillK) Date: Fri, 27 Feb 2015 08:31:28 +0000 Subject: [ExI] Color In-Reply-To: References: Message-ID: On 27 February 2015 at 03:04, Mike Dougherty wrote: > Recent discussion of effing red made me think to share this here. 
> http://www.buzzfeed.com/catesish/help-am-i-going-insane-its-definitely-blue > http://www.buzzfeed.com/claudiakoerner/this-might-explain-why-that-dress-looks-blue-and-black-and-w > > What's going on with this? I assumed it was an elaborate hoax until my wife > said it reversed too. Parallel worlds like some Twilight Zone episode could > be an acceptable answer even if highly improbable. :) > Snopes says it is a mild internet prank. (Getting the twitterati all over-excited). Quote: The highlighted bit of the dress's darker top stripe reads through an eyedropper tool as hex code #806D48, which is a gold-based tone. However, a lighter portion of the sleeve (described by many as white) reads as hex code #A0A1B9, a heavily blue-tinged color. Ultimately, it's far less likely that the image is a mood diagnostic than a mild prank involving image manipulation software and the power of suggestion. ------------ BillK From possiblepaths2050 at gmail.com Fri Feb 27 11:27:32 2015 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 27 Feb 2015 04:27:32 -0700 Subject: [ExI] Transhumanist/sf themed tabletop games In-Reply-To: References: Message-ID: Keith, I suspected Posthuman Pathways would find a customer or two! Jay, as familiar as I am with Anders' work, I did not know a number of the items you adroitly pointed out. And it should not surprise me that he has a White Wolf Games Mage Page, because I can easily envision him as a wizard, constantly puttering around in his inner sanctum! lol Adrian, your campaign sounds interesting! I just found out that an acquaintance loves gm'ing TS games, but he does not play it often since he said the prep time for it is very demanding. "All those AI's, embedded everywhere!" Claus, thank you for the praise and your own great list of games. I need to get every one you listed! I am very familiar with Orion's Arm, and the first time I read over the website, I felt like I was having a religious experience. 
I would love to see a quality rpg adapted from it! I hope more people respond to this thread, because this is a very fun discussion. John : ) P.S. Perhaps having an Extropy List Gaming Con is not such a goofy idea.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Fri Feb 27 13:17:34 2015 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 27 Feb 2015 08:17:34 -0500 Subject: [ExI] Color In-Reply-To: <54EFE549.6050701@canonizer.com> References: <54EFE549.6050701@canonizer.com> Message-ID: On Thu, Feb 26, 2015 at 10:32 PM, Brent Allsop wrote: > > In my opinion, this kind of stuff just confuses people, and leads them away > from what is important. So you would walk away from this phenomenon as "unimportant" and go back to 'effing zombie red' ? > What is important is a simple elemental redness, and a simple elemental > greyness, and the qualitative difference between them, and the fact that > zombie physics has no account for that. Seems you'd be willing to burn down the rest of the forest so everyone could appreciate your one tree. > What is that difference, when is someone else experiencing the same, or > inverted, and what are the neural correlates of that obvious difference, > i.e. how do you detect one vs. the other. This is an instance where 75% of those polled are "experiencing" a white+gold dress while the other 25% are "experiencing" blue+black. Everyday people are reposting this image and asking "Why?" A massive audience has just turned their surety of color awareness into a moment of unknown. All these people are saying "I see X, what do you see?" and they are questioning their assumption that everyone else experiences the same phenomenal reality. I assumed it was a hoax. (It's not exactly a hoax.) "Rick-rolling" was a popular prank that no single individual could have propagated to the frequency that it was happening, but it happened. 
Somebody discovered a phase transition point in color perception, then used it to create a very strong replicating meme. Even if this group has discussed it all before, this network effect of the Internet [social media(s)] to spread memes is fascinating. I hope that's not lost simply because "memes" were discussed in the early 1990's so it's already been done. So even if we don't discuss fashion chromatics, there is plenty of meta-discussion about the impact of this image on "proles" who also rarely think on the subject. *shrug* but maybe not. From painlord2k at libero.it Fri Feb 27 13:32:46 2015 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 27 Feb 2015 14:32:46 +0100 Subject: [ExI] beheadings etc In-Reply-To: References: Message-ID: <54F071FE.6090800@libero.it> Il 26/02/2015 03:40, Keith Henson wrote: > Think about it. Why do people have wars at all? What is the purpose? > We share the war trait with chimps, but humans are not in war mode > with everyone else all the time like chimps are. These "chimps" are physicians and engineers; they speak many languages and travel many lands. Comparing them to chimps is disingenuous. Chimps do not formally go to war, do not convert the individuals of the other tribe, do not ponder about the right way to go to war and whether the reasons to go to war are good or not, right or not, permissible or not. It could be reassuring, for a highly intelligent person, to classify these people as "chimps" or "chimp-like". But "chimp-like" behavior is not a protracted effort lasting years, decades or centuries. It could be reassuring to think the reasons to go to war are poverty, desperation or hopelessness, but this does not make it true. Chimp-like behavior would last a few days or weeks, at most. It would exhaust itself pretty fast and would be aimed at the neighbors, not far-away people. A Saudi would not travel from England to Africa, along winding ways, to reach Libya, just guided by his monkey brain. 
He would not go alone on a self-sacrificing mission to kill his enemies. Resource limits are often a poor excuse for a war. Because people are not chimps. Because wars in historic times are not waged by chimp minds; those who waged them that way were selected out of the gene pool thousands of years ago. Because only very high-functioning sociopaths can be at the top, and even they dislike having other sociopaths near. > It's hard to think of a section of the world with higher population > growth or poorer resources prospects than that section of the middle > east. It is no wonder they are trying to kill all the other human > groups. Poverty never caused killing sprees. If it did, there would be a massacre every day in the US and in Europe. Envy, hate, rage, gluttony are better explanations. But, as much as I disagree with your conclusions about the causes, is there any solution to this problem? I suppose, if poverty is the cause, showering them with money or material goods should tranquillize and sedate them. But apparently it doesn't. Any other suggestion? What is the solution when a horde of killing monkeys starts ravaging the countryside and showing up in the mall or in the university offices (as happened at Virginia Tech)? > To go into war mode humans have to be infested with a meme set that > dehumanizes the other group(s). IS certainly has that, but remember > that the causality runs from the environmental signal to an amplified > xenophobic meme. > Before you think I am particularly picking on the Arabs, the pre WW II > Germans were in a similar spot, and they had a similar response as > did the Cambodians and the Rwandans to similar signals. >> Basically, the IS is the Nazism of Islam. > Evolved human behavior is mechanistic. In reality, it appears that a few Nazis took ideas from Islam and then Islam (the Muslim Brotherhood) took ideas from Nazism. It is a cross-pollination chimps are not famous for. IS are the Muslims who talk the talk and walk the walk. 
The majority is just talking the talk (with other Muslims) and lets someone else walk the walk. Mirco From painlord2k at libero.it Fri Feb 27 14:56:38 2015 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 27 Feb 2015 15:56:38 +0100 Subject: [ExI] beheadings etc In-Reply-To: References: Message-ID: <54F085A6.9010007@libero.it> Il 27/02/2015 02:33, John Grigg wrote: > This discussion reminds me of the Dune series by Frank Herbert, and how > his message was to beware of religious fanaticism and messiahs. I > remember how even Paul Atreides knew that by unleashing his priests and > other fanatical warriors on the galaxy, there would be untold > massacres and war crimes. > > I wish Frank Herbert were still around (he has been gone for around 30 > years) to tell us what he thinks of ISIS, and the over-all situation in > the Middle East. I suppose "Treat them as you would treat Harkonnen". Mirco From painlord2k at libero.it Fri Feb 27 14:58:35 2015 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 27 Feb 2015 15:58:35 +0100 Subject: [ExI] beheadings etc. In-Reply-To: <3327281444-3007@secure.ericade.net> References: <3327281444-3007@secure.ericade.net> Message-ID: <54F0861B.80906@libero.it> Il 26/02/2015 15:50, Anders Sandberg wrote: > You make a mistake in ascribing Boko Haram to sociopathy: sociopaths are > rare, you cannot build a large organisation from them (and they make > lousy members). Most Boko Haram members are just like you and me. That > is of course the real horror and lesson of BM, IS or the Nazis: most > members were totally ordinary people swept into a pathological culture. Any solution to propose? How do we interact with people like these? ISIS Destroys Archaeological Treasures in Mosul https://www.youtube.com/watch?v=JEYX_CbwAD8 And when they have no idols to destroy, they start with people. 
Mirco From atymes at gmail.com Fri Feb 27 16:26:35 2015 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 27 Feb 2015 08:26:35 -0800 Subject: [ExI] Transhumanist/sf themed tabletop games In-Reply-To: References: Message-ID: On Feb 27, 2015 3:30 AM, "John Grigg" wrote: > Adrian, your campaign sounds interesting! I just found out that an acquaintance loves gm'ing TS games, but he does not play it often since he said the prep time for it is very demanding. "All those AI's, embedded everywhere!" Thanks. We're using Diaspora, as Claus linked to; it is a bit space-opera-ish, but that is to be expected since we started from (and are still technically in) the Wing Commander 'verse. The AIs are just more NPCs (or in one case a PC), when they are self-aware at all (otherwise they're just tools) - or at least, that is true of the ones this campaign has touched on - and mostly only matter so far as the PCs will interact with them (an adbot might literally only matter for one sentence in setting a scene, for example), so I don't see where they need much more prep than other such characters. (Though I admit to quite a bit of improv...but then, with my players, improv is less optional than for most campaigns I have seen.) If you want to read the campaign logs, http://blacksuncondensates.wikidot.com/ - we play most Saturdays (save when life forces a skipped week), and I try to post each session's log by the next session. -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Fri Feb 27 16:50:15 2015 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Fri, 27 Feb 2015 17:50:15 +0100 Subject: [ExI] beheadings etc. In-Reply-To: <54F0861B.80906@libero.it> References: <3327281444-3007@secure.ericade.net> <54F0861B.80906@libero.it> Message-ID: How to blame the US? Or at least the West as a whole, for this? I am sure some will come up with a solution, quickly. 
On Fri, Feb 27, 2015 at 3:58 PM, Mirco Romanato wrote: > Il 26/02/2015 15:50, Anders Sandberg ha scritto: > > > You make a mistake in ascribing Boko Haram to sociopathy: sociopaths are > > rare, you cannot build a large organisation from them (and they make > > lousy members). Most Boko Haram members are just like you and me. That > > is of course the real horror and lesson of BM, IS or the Nazis: most > > members were totally ordinary people swept into a pathological culture. > > Any solution to propose? > > How we interact with people like these? > > ISIS Destroys Archaeological Treasures in Mosul > https://www.youtube.com/watch?v=JEYX_CbwAD8 > > And when they have no idols to destroy, they start with people. > > Mirco > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- https://protokol2020.wordpress.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Feb 27 16:48:28 2015 From: spike66 at att.net (spike) Date: Fri, 27 Feb 2015 08:48:28 -0800 Subject: [ExI] beheadings etc. In-Reply-To: <54F0861B.80906@libero.it> References: <3327281444-3007@secure.ericade.net> <54F0861B.80906@libero.it> Message-ID: <011c01d052ad$36eb3b30$a4c1b190$@att.net> -----Original Message----- From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mirco Romanato Sent: Friday, February 27, 2015 6:59 AM To: ExI chat list Subject: Re: [ExI] beheadings etc. Il 26/02/2015 15:50, Anders Sandberg ha scritto: >> ... most members were totally ordinary people swept into a pathological culture. >...Any solution to propose? >...How we interact with people like these? >...ISIS Destroys Archaeological Treasures in Mosul https://www.youtube.com/watch?v=JEYX_CbwAD8 >...And when they have no idols to destroy, they start with people. 
>...Mirco _______________________________________________ Mirco, on the contrary sir. The museums have guards. These must be defeated, slain or scattered. So they start with people and when they have no people to destroy, they start with idols. But let's look at it another way, shall we? The Middle East has some of the most ancient antiquities known. The archaeological treasures of Mosul attracted tourists from all over the world. The tourists were in danger from being kidnapped and held for ransom, then slain. Result: the tourists will not come to Mosul because the antiquities have been smashed to shards and hurled into the landfill, history books are rewritten to start in the 7th century, with anything previous being irrelevant - either redundant or heresy - tourists go elsewhere, lives are saved. spike From anders at aleph.se Fri Feb 27 17:09:27 2015 From: anders at aleph.se (Anders Sandberg) Date: Fri, 27 Feb 2015 18:09:27 +0100 Subject: [ExI] Transhumanist/sf themed tabletop games In-Reply-To: Message-ID: <3421980055-445@secure.ericade.net> Adrian Tymes, 27/2/2015 6:24 AM: I have a friend who raves about Eclipse Phase, but I've read through it and a lot of its concepts seem almost unplayable, as a consequence of its setting. Rave, rave, rave! Me and my groups found it totally playable. Yes, you need to gloss over things to some extent to keep it playable, but we had a lot of interesting adventures in the setting. http://www.aleph.se/EclipsePhase/ I have earlier mentioned the essay on law and order on the Extropia habitat on this list, for some reason. I'm running a game based around several independent Singularities happening at once (a common mildly tech-suppressive government - e.g., giving public support for those sciences that advanced humanity in little steps without empowering the masses too much - suddenly lost access to and control of the area) and running into one another, with the PCs charged with cleaning up the resulting chaos. 
You may know what to do when someone asks if you are a god, but what about when what's supposed to be the sworn enemy of your race tells you that you are a god and sets about proving it? Sounds good! My current campaign setting is relatively low-tech (not too far from classical cyberpunk), but there are signs that there are both nanomachines, AI, megastructures and *really* strange things (million kilometre spaceships, anyone?) in the bigger universe - which makes it extra suspicious that humanity has failed at building an AI smarter than IQ 60 for so long... let's just say that the truth gives the word "posthuman" an entirely new twist. An ExtroCon would be fun. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Fri Feb 27 17:13:13 2015 From: anders at aleph.se (Anders Sandberg) Date: Fri, 27 Feb 2015 18:13:13 +0100 Subject: [ExI] Transhumanist/sf themed tabletop games In-Reply-To: Message-ID: <3422868053-5438@secure.ericade.net> John Grigg, 27/2/2015 12:31 PM: Jay, as familiar as I am with Anders' work, I did not know a number of the items you adroitly pointed out. And it should not surprise me that he has a White Wolf Games Mage Page, because I can easily envision him as a wizard, constantly puttering around in his inner sanctum! lol Well, I am a researcher at Hogwarts after all :-) (Which reminds me, I ought to get myself a proper Oxford gown to show off my wizardness to my brother's kids) Mage was great, especially when it dared to allow truly weird possibilities in the earlier editions. However, transhumanism can show up in odd environments. After all, it is a kind of mindset. I was in a fantasy campaign where the heroes were essentially building a transhumanist fantasy kingdom - magic misused in the right way allows a lot of cool stuff (the most bizarre was the skeleton-powered factories programmed in COBOL). 
Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Fri Feb 27 17:23:54 2015 From: anders at aleph.se (Anders Sandberg) Date: Fri, 27 Feb 2015 18:23:54 +0100 Subject: [ExI] beheadings etc. In-Reply-To: <54F0861B.80906@libero.it> Message-ID: <3423166789-5438@secure.ericade.net> Mirco Romanato, 27/2/2015 4:01 PM: Il 26/02/2015 15:50, Anders Sandberg wrote: > You make a mistake in ascribing Boko Haram to sociopathy: sociopaths are > rare, you cannot build a large organisation from them (and they make > lousy members). Most Boko Haram members are just like you and me. That > is of course the real horror and lesson of BH, IS or the Nazis: most > members were totally ordinary people swept into a pathological culture. Any solution to propose? How do we interact with people like these? How do you interact with any fanatical people? Figure out what makes them tick and then use it. I am no expert, but BH is very much a response to Nigeria's rich south ignoring the poor north: it could have turned into a normal guerrilla rebellion, but instead it took on a religious angle since the north-south divide is also a religious divide. Rather than adding anti-western sentiments in the socialist mode, it settled on a religious mode (hence the name - to BH education is a weapon of cultural assimilation used by the south and the West). Maybe there is also Salafist foreign aid, I don't know. So a first issue that needs to be addressed is what the Nigerian government is doing with the oil money - it might be necessary to start squeezing them on corruption and the need to include the whole country. This won't stop BH directly, but handled well it might deprive it of the core driver, and make more people willing to resist. 
Most likely the only proper solution is for Nigeria to actually get its governance act together and construct a working police and military. The thing is, when members of groups like this are interviewed they turn out to be fairly nonideological: a few zealots here and there, but mostly a lot of people who recognize a good thing for themselves when they see it, or are scared not to join in. The demonisation that happens in the standard media depictions is partially deliberate, but partially just because the real story is depressingly banal. Some people really did join the Nazi party for the dental plan. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Feb 27 18:54:25 2015 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 27 Feb 2015 10:54:25 -0800 Subject: [ExI] Transhumanist/sf themed tabletop games In-Reply-To: <3421980055-445@secure.ericade.net> References: <3421980055-445@secure.ericade.net> Message-ID: On Feb 27, 2015 9:11 AM, "Anders Sandberg" wrote: > Adrian Tymes , 27/2/2015 6:24 AM: >> I have a friend who raves about Eclipse Phase, but I've read through it and a lot of its concepts seem almost unplayable, as a consequence of its setting. > > Rave, rave, rave! Me and my groups found it totally playable. Yes, you need to gloss over things to some extent to keep it playable, but we had a lot of interesting adventures in the setting. > http://www.aleph.se/EclipsePhase/ > I have earlier mentioned the essay on law and order on the Extropia habitat on this list, for some reason. Glossing over some parts would do it. Any mechanics can be Rule Zeroed. ;) Mind if I put said friend in touch with you? I think he might have mentioned that site before, as a useful resource. > which makes it extra suspicious that humanity has failed at building an AI smarter than IQ 60 for so long...
That's similar to a twist my players are discovering - except it's an alien civilization that got stuck (to the degree that they got religious about it), so they kickstarted another alien civ to see if they could solve it, then fast-forwarded their entire civilization using near-c travel to skip through the eons this second civ needed to become starfaring. Just as they were ending it, humanity and its allies came along and messed things up in glorious fashion. (And then the PCs mess things up further personally, which resonates with said religion. Thus, "What do you do when someone tells you that you are a god, and uses your actions as evidence?") -------------- next part -------------- An HTML attachment was scrubbed... URL: From painlord2k at libero.it Fri Feb 27 19:46:56 2015 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 27 Feb 2015 20:46:56 +0100 Subject: [ExI] beheadings etc. In-Reply-To: <3423166789-5438@secure.ericade.net> References: <3423166789-5438@secure.ericade.net> Message-ID: <54F0C9B0.80800@libero.it> Il 27/02/2015 18:23, Anders Sandberg ha scritto: > The thing is, when members of groups like this are interviewed they turn > out to be fairly nonideological: a few zealots here and there, but > mostly a lot of people who recognize a good thing for themselves when > they see it, or are scared not to join in. The demonisation that happens > in the standard media depictions is partially deliberate, but partially > just because the real story is depressingly banal. Some people really > did join the Nazi party for the dental plan. And did they defeat the Nazis with a better dental plan? (1) If someone just joined the Nazis because of the dental plan or something like it, they deserved to be bombed out without remorse. And if they did it one time, they will do it another time, without remorse. In fact, I believe it is worse to do it for non-ideological reasons. At least the people joining and fighting for ideological reasons can change their mind.
The people doing it for non-ideological reasons have nothing to change. Mirco (1) I use "they" and not "we" because Italy and Sweden do not have a good record of stopping Nazism. From spike66 at att.net Fri Feb 27 23:34:21 2015 From: spike66 at att.net (spike) Date: Fri, 27 Feb 2015 15:34:21 -0800 Subject: [ExI] no one will ever believe Message-ID: <016101d052e5$ea944b80$bfbce280$@att.net> "No one will ever believe that both your hard drive and mine crashed within a week." (Comment on Lois Lerner's recovered destroyed email.) http://joemiller.us/2015/02/smoking-gun-lois-lerner-email-found-discussed-faking-hard-drive-crashes-criminal-probe-opened/ OK then let us test that theory. Does anyone here believe that two critical hard drives containing email such as the one above crashed within a week of each other? I thought not. The internet never forgets. Good chance I will be going in for an IRS audit soon, for having expressed my candid doubt. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Feb 28 00:05:05 2015 From: pharos at gmail.com (BillK) Date: Sat, 28 Feb 2015 00:05:05 +0000 Subject: [ExI] no one will ever believe In-Reply-To: <016101d052e5$ea944b80$bfbce280$@att.net> References: <016101d052e5$ea944b80$bfbce280$@att.net> Message-ID: On 27 February 2015 at 23:34, spike wrote: > "No one will ever believe that both your hard drive and mine crashed within > a week." > (Comment on Lois Lerner's recovered destroyed email.) > > http://joemiller.us/2015/02/smoking-gun-lois-lerner-email-found-discussed-faking-hard-drive-crashes-criminal-probe-opened/ > > OK then let us test that theory. Does anyone here believe that two critical > hard drives containing email such as the one above crashed within a week of > each other? > > I thought not. > > The internet never forgets. Good chance I will be going in for an IRS audit > soon, for having expressed my candid doubt.
> It is what every IT tech said from the very beginning. It doesn't matter whether two hard drives miraculously crashed. Every large company has mail backup tapes. Guess what - they just found them in the tape library! BillK From danust2012 at gmail.com Sat Feb 28 00:16:35 2015 From: danust2012 at gmail.com (Dan) Date: Fri, 27 Feb 2015 16:16:35 -0800 Subject: [ExI] no one will ever believe In-Reply-To: <016101d052e5$ea944b80$bfbce280$@att.net> References: <016101d052e5$ea944b80$bfbce280$@att.net> Message-ID: > On Feb 27, 2015, at 3:34 PM, "spike" wrote: > > "No one will ever believe that both your hard drive and mine crashed within a week." > > (Comment on Lois Lerner's recovered destroyed email.) > > http://joemiller.us/2015/02/smoking-gun-lois-lerner-email-found-discussed-faking-hard-drive-crashes-criminal-probe-opened/ > > > OK then let us test that theory. Does anyone here believe that two critical hard drives containing email such as the one above crashed within a week of each other? > > I thought not. > > The internet never forgets. Good chance I will be going in for an IRS audit soon, for having expressed my candid doubt. > > spike I can think of many instances where something like this has happened when the data might lead to a bad outcome for the party whose data was "disappeared." Regards, Dan Sample my latest Kindle book at: http://www.amazon.com/Fruiting-Bodies-Nanovirus-Book-2-ebook/dp/B00U1UCN9A/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike66 at att.net Sat Feb 28 00:12:38 2015 From: spike66 at att.net (spike) Date: Fri, 27 Feb 2015 16:12:38 -0800 Subject: [ExI] no one will ever believe In-Reply-To: References: <016101d052e5$ea944b80$bfbce280$@att.net> Message-ID: <005a01d052eb$44000ab0$cc002010$@att.net> -----Original Message----- From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK Sent: Friday, February 27, 2015 4:05 PM To: ExI chat list Subject: Re: [ExI] no one will ever believe On 27 February 2015 at 23:34, spike wrote: >>>... "No one will ever believe that both your hard drive and mine crashed within a week." (Comment on Lois Lerner's recovered destroyed email.)... >>... OK then let us test that theory. Does anyone here believe that two > critical hard drives containing email such as the one above crashed > within a week of each other? > >...It is what every IT tech said from the very beginning. It doesn't matter whether two hard drives miraculously crashed. Every large company has mail backup tapes. Guess what - they just found them in the tape library! BillK _______________________________________________ Well there you go. They hid the backup tapes in the tape library, under the date on which the backup was made, the sneaky bastards. This shows that IRS Director Lerner does tell the truth sometimes. She said no one would believe it was a simultaneous disc crash, and no one does. So at least under some circumstances, such as while in the act of committing criminal conspiracy, she does tell the truth. spike From anders at aleph.se Sat Feb 28 11:35:40 2015 From: anders at aleph.se (Anders Sandberg) Date: Sat, 28 Feb 2015 12:35:40 +0100 Subject: [ExI] Breakout success for reinforcement learning Message-ID: <3488764102-11415@secure.ericade.net> Google DeepMind's paper on reinforcement learning playing Atari games is now out in Nature: Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013).
Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. http://googleresearch.blogspot.co.uk/2015/02/from-pixels-to-actions-human-level.html http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html https://www.youtube.com/watch?v=iqXKQf2BOSE http://arxiv.org/abs/1312.5602 The Nature version has a really interesting plot of relative performance on different games. It looks like the system is amazing at games where the current state is all you need to deal with, while it is less successful at games where you need to find objects and use them in the right location later in the game. Not too surprising (reinforcement learning is closely linked to Markov chains) but nevertheless a good indication of where the next rewards in research are likely to lie. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University -------------- next part -------------- An HTML attachment was scrubbed... URL: From alito at organicrobot.com Sat Feb 28 13:08:07 2015 From: alito at organicrobot.com (Alejandro Dubrovsky) Date: Sun, 01 Mar 2015 00:08:07 +1100 Subject: [ExI] Breakout success for reinforcement learning In-Reply-To: <3488764102-11415@secure.ericade.net> References: <3488764102-11415@secure.ericade.net> Message-ID: <54F1BDB7.2000008@organicrobot.com> On 28/02/15 22:35, Anders Sandberg wrote: > Google DeepMind's paper on reinforcement learning playing Atari games is > now out in Nature: Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., > Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari > with deep reinforcement learning. arXiv preprint arXiv:1312.5602. > > http://googleresearch.blogspot.co.uk/2015/02/from-pixels-to-actions-human-level.html > http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html > https://www.youtube.com/watch?v=iqXKQf2BOSE > http://arxiv.org/abs/1312.5602 > Yep, DeepMind are killing it at the moment.
They've also released their source code with their Nature paper. You can get it from here: https://sites.google.com/a/deepmind.com/dqn/ There's also code independently developed by Nathan Sprague based on the 2013 version of the paper here: https://github.com/spragunr/deep_q_rl. The 2013 version used a narrower and shallower network, and it also didn't use the new two-network system that they adopted in the Nature version to avoid oscillatory behaviour. At the moment the only advantage of that code would be the more liberal licencing, since DeepMind's code can only be used for research purposes. It shouldn't take long for it to catch up with the newer version, and I suspect it will become the go-to version because of the licencing advantage. BTW, if you are going to play with the code, make sure to get the beastliest Nvidia GPU that you can afford. From brent.allsop at canonizer.com Sat Feb 28 21:18:15 2015 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 28 Feb 2015 14:18:15 -0700 Subject: [ExI] Color In-Reply-To: References: <54EFE549.6050701@canonizer.com> Message-ID: <54F23097.5090905@canonizer.com> Hi Mike, Very good point. I guess I was going way too far and burning down way too many very important trees. Thanks for pointing out the obvious to me; this is very valuable information. The naive popular view is that color is a property of the initial causes of the perception process, and at least one critically important part of this currently popular meme is that it is educating people about the fallacy in that view. Diversity of qualia, or inverted qualia, is another important concept this illustrates, although in the strictest sense this is a very complicated inverted-qualia example.
All I was trying to say was that if you want to bridge the "Explanatory Gap" and point out that there is no "hard problem", the easiest way to do it is to avoid things like this and instead focus on the two simplest and most obviously qualitatively different qualities, like simple redness and greenness. And then show how you can detect and see those and their differences. You need to show how you can see these colors in improperly interpreted 'grey matter' of the brain. (Note: "grey" is a misinterpretation of the true color.) Once you bridge this explanatory gap with such an obvious, simple example, all other qualitative so-called "problems" of what other brains are "like" can be shown to be more complex variations on that simple and obvious qualitative theory of consciousness. Brent On 2/27/2015 6:17 AM, Mike Dougherty wrote: > On Thu, Feb 26, 2015 at 10:32 PM, Brent Allsop > wrote: >> In my opinion, this kind of stuff just confuses people, and leads them away >> from what is important. > So you would walk away from this phenomenon as "unimportant" and go > back to 'effing zombie red' ? > >> What is important is a simple elemental redness, and a simple elemental >> greyness, and the qualitative difference between them, and the fact that >> zombie physics has no account for that. > Seems you'd be willing to burn down the rest of the forest so everyone > could appreciate your one tree. > >> What is that difference, when is someone else experiencing the same, or >> inverted, and what are the neural correlates of that obvious difference, >> i.e. how do you detect one vs the other. > This is an instance where 75% of those polled are "experiencing" a > white+gold dress while the other 25% are "experiencing" blue+black. > Everyday people are reposting this image and asking "Why?" A massive > audience has just turned their surety of color awareness into a moment > of unknown. All these people are saying "I see X, what do you see?"
> and they are questioning their assumption that everyone else > experiences the same phenomenal reality. > > I assumed it was a hoax. (It's not exactly a hoax.) "Rick-rolling" was a > popular prank that no single individual could have propagated to the > frequency that it reached, but it happened. Somebody discovered > a phase transition point in color perception, then used it to create a > very strong replicating meme. Even if this group has discussed it all > before, this network effect of the Internet [social media(s)] to > spread memes is fascinating. I hope that's not lost simply because > "memes" were discussed in the early 1990's so it's already been done. > > So even if we don't discuss fashion chromatics, there is plenty of > meta-discussion about the impact of this image on "proles" who also > rarely think on the subject. > > *shrug* but maybe not. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hrivera at alumni.virginia.edu Sat Feb 28 18:16:59 2015 From: hrivera at alumni.virginia.edu (Henry Rivera) Date: Sat, 28 Feb 2015 13:16:59 -0500 Subject: [ExI] Breakout success for reinforcement learning In-Reply-To: <3488764102-11415@secure.ericade.net> References: <3488764102-11415@secure.ericade.net> Message-ID: On Feb 28, 2015, at 6:35 AM, Anders Sandberg wrote: > > Google DeepMind's paper on reinforcement learning playing Atari games is now out in Nature: Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
> > http://googleresearch.blogspot.co.uk/2015/02/from-pixels-to-actions-human-level.html > http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html > https://www.youtube.com/watch?v=iqXKQf2BOSE > http://arxiv.org/abs/1312.5602 > Video interview with the author: http://www.pbs.org/newshour/rundown/artificial-intelligence-program-teaches-play-atari-games-can-beat-high-score/
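[Editor's note] For readers curious about the mechanics discussed in the reinforcement-learning thread above - Q-learning's reliance on Markov state, and the "two-network" trick the Nature paper uses to damp oscillation - here is a minimal, self-contained sketch. It is a tabular Q-learning toy on a five-state chain, not DeepMind's deep network over Atari pixels; the environment and all parameter names (ALPHA, SYNC_EVERY, etc.) are illustrative inventions, not taken from the paper.

```python
import random

# Tabular Q-learning on a tiny deterministic chain MDP, with a
# periodically synced "target" copy of the Q-table standing in for
# the frozen target network of the Nature DQN. This is a sketch of
# the idea only, not DeepMind's implementation.

N_STATES = 5        # states 0..4; reaching state 4 ends the episode
ACTIONS = (0, 1)    # 0 = step left, 1 = step right
GAMMA = 0.9         # discount factor
ALPHA = 0.5         # learning rate
EPSILON = 0.2       # exploration probability
SYNC_EVERY = 20     # updates between target-table refreshes

def step(state, action):
    """Deterministic chain: right moves toward the goal, left away."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    target = [row[:] for row in q]          # frozen copy, synced rarely
    updates = 0
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Bootstrap from the *target* copy, not the live table:
            # this is the oscillation-damping trick mentioned above.
            boot = 0.0 if done else GAMMA * max(target[s2])
            q[s][a] += ALPHA * (r + boot - q[s][a])
            updates += 1
            if updates % SYNC_EVERY == 0:
                target = [row[:] for row in q]
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    # Greedy policy per non-terminal state; once values have propagated
    # back along the chain it should prefer "right" (1) everywhere.
    print([max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)])
```

Note also why Anders's Markov point matters here: the update only ever looks at the current state, so a game whose optimal move depends on something seen long ago (an object picked up earlier) is invisible to this kind of learner unless that history is folded into the state itself.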