From sjatkins at mac.com Sat Jan 1 00:12:02 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 31 Dec 2010 16:12:02 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: References: Message-ID: On Dec 31, 2010, at 9:48 AM, Keith Henson wrote: > On Fri, Dec 31, 2010 at 12:32 AM, Samantha Atkins wrote: > >> On Dec 30, 2010, at 3:18 PM, Keith Henson wrote: >>> >>> And 20 years ago, that was the right conclusion. Now we have a path, >>> even if it is kind of expensive, to 9+km/sec exhaust velocity. That >>> means mass ratio 3 rockets to LEO and even better LEO to GEO. >> >> I must have missed it. Please give details, links, etc. How expensive? How large a payload? What technologies? > > Context is SBSP, 200 GW of new power per year, one million tons of > parts going up per year. That's about 125 tons per hour delivered to > GEO. > > The SSTO vehicle is an evolution of the Skylon design > http://www.astronautix.com/lvs/skylon.htm swapping lox for payload and > a sapphire window between the engines with 10-20 bar hydrogen and a > deep channel heat absorber behind it. The flow of cold hydrogen keeps > the window and the front surface of the heat absorber cool. The > absorber is described here: > http://www.freepatentsonline.com/4033118.pdf > A Skylon only delivers about 12 tons per trip to LEO. They were designed for no less than 200 launch lifetimes. And they were designed for two launch windows per day equatorial. I don't see how you get from that to 125 tons / hr to LEO, much less GEO. > One part is fixed by physics and the Earth's gravity field. The > minimum horizontal boost acceleration after getting out of the > atmosphere with substantial vertical velocity has to be slightly more > than a g to achieve orbit before running into the atmosphere. You > want to use the minimum acceleration you can at the highest exhaust > velocity you have energy for. This keeps down the laser power, which > is huge anyway.
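The throughput arithmetic quoted above is easy to check; a minimal sketch, where the ~35 tons delivered to GEO per flight is Keith's own figure from later in this exchange and the other round numbers are his:

```python
# Sanity check of the quoted throughput: one million tons per year to GEO,
# with ~35 tons delivered to GEO per flight (both figures from the thread).
tons_per_year = 1_000_000
hours_per_year = 365.25 * 24          # ~8766 h

tons_per_hour = tons_per_year / hours_per_year
flights_per_hour = tons_per_hour / 35
minutes_between = 60 / flights_per_hour

print(f"{tons_per_hour:.0f} t/hr, one flight every {minutes_between:.0f} min")
# -> "114 t/hr, one flight every 18 min" -- close to the quoted "about 125
#    tons per hour", and consistent with a takeoff every 15-20 minutes
```

So the "about 125 tons per hour" is a slight round-up of ~114 t/hr, and the implied cadence matches the 15-20 minute launch interval claimed downthread.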
> > This takes 15-20 minutes and only in the last third do you get up to > the full 3000 deg K and 9.8 km/sec. The average (for this size > vehicle and 6 GW) is 8.5 km/sec, but the first 2 km/sec in air > breathing mode has an equivalent exhaust velocity of 10.5 km/sec. So > about 1/3 of takeoff mass (300 tons) gets to orbit. The vehicle mass > is about 50 tons leaving 50 tons for the LEO to GEO stage. That is a much, much larger craft than a Skylon. Do you have links for it? Note that an Ares V launch (super heavy launcher) can only put about 63 tons into GEO. So this craft capability seems like serious magic to me. > > So the payload at GEO per load needs to be 1/4 to 1/3 of 125 tons. > Again using laser heated hydrogen 35 tons of a 50 ton second stage > will get there. With some care in the design, it can all be used for > power satellite construction. > > The long acceleration means the lasers must track the vehicle over a > substantial fraction of the circumference of the earth. Wait, you are using lasers to provide thrust to this big honking lift vehicle? I presume you are aware we have only tested this for very, very small vehicles and never to high altitudes. This is in no way near term tech for a vehicle of this size. Or do you intend to use laser propulsion only for the LEO to GEO phase? Using the standard 1 MW/kg gives 300 GW for a 300 ton vehicle, 50 GW for a 50 ton vehicle. Lasers are generally 10% power efficient so 10x the output power is needed to drive them. What is the joke? > Based on > Jordin Kare's work, this takes a flotilla of mirrors in GEO. Current > space technology is good enough to keep the pointing error down to .7 > meters at that distance while tracking the vehicle. The lasers don't > need to be on the equator so they can be placed where there is grid > power. They need to be 30-40 deg to the east of the launch point. > Uh huh. What is the max distance you are speaking of?
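For what it's worth, the 3000 deg K / ~9-10 km/sec pairing quoted above is roughly what ideal thermal expansion of hot hydrogen predicts. This is only a back-of-the-envelope sketch, not the actual engine model; the constant heat capacity for H2 is an assumed round value (the real cp rises with temperature):

```python
import math

# Ideal thermal-rocket limit: all enthalpy converted to jet kinetic
# energy, v_e = sqrt(2 * cp * T).  cp for H2 assumed ~14,300 J/(kg*K).
cp_h2 = 14_300.0      # J/(kg*K), assumed constant
T_chamber = 3000.0    # K, chamber temperature quoted in the post

v_e = math.sqrt(2 * cp_h2 * T_chamber)
print(f"ideal exhaust velocity: {v_e / 1000:.1f} km/s")  # ~9.3 km/s
```

A real nozzle recovers somewhat less than this ideal figure, so the quoted 9.8 km/sec peak presumably reflects a more detailed model.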
> There are (I think) only four locations where there is an equatorial > launch site with thousands of km of water to the east. The US has one > set, China has a better one. > > The lasers are the big ticket item. At $10/watt, $60 B. That is the cost for 6 GW. > The rest, > vehicles, mirrors, ground infrastructure, R&D, etc. might bring it up > to $100 B--which is a fraction of the expected profits per year from > selling that many power satellites. > > I don't expect it to be done by the US. China, maybe. I don't expect it to be done by anyone in this manner from the above description. I don't see how to make such a vehicle or operate that kind of laser propulsion system at such a scale. - samantha From sjatkins at mac.com Sat Jan 1 00:13:06 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 31 Dec 2010 16:13:06 -0800 Subject: [ExI] Meat v. Machine In-Reply-To: <4D1BD7D2.5030403@aleph.se> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> Message-ID: <8CA05405-136F-4264-9DB6-9F2F30FE75C0@mac.com> On Dec 29, 2010, at 4:52 PM, Anders Sandberg wrote: > On 2010-12-29 06:27, John Clark wrote: >> On Dec 28, 2010, at 11:50 PM, The Avantguardian wrote: >> >>> If machine-phase life is so inevitable and so superior, where are the >>> Von Neumann probes? >> >> Two possible answers: >> >> 1) Somebody has to be the first intelligent technological civilization >> in the visible universe, perhaps it is us. > > This weird possibility should be considered. > >> 2) Some road block prevents intelligence from engineering the cosmos, my >> best guess of what that impediment is would be electronic drug addiction. > > That possibility has jumped a bit in my estimation this year, but I still find it somewhat problematic. The main reason is that it requires electronic drug addiction (or games, superb art, sex or whatever) to be strongly convergent: nobody and nothing can resist it.
That seems to be a tall order, since it is enough that only one individual manufactures a successful von Neumann anywhere for them to dominate. > http://www.aleph.se/andart/archives/2010/04/flanders_vs_fermi.html Games: hid the game machines in the closet. Superb art: I don't find that much I really consider superb. Sex: see art. :) I have a feeling that actual species that make it to and through an AI singularity aren't so intent on "dominating" as we relative primitives are. - s From sjatkins at mac.com Sat Jan 1 00:19:43 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 31 Dec 2010 16:19:43 -0800 Subject: [ExI] Meat v. Machine In-Reply-To: <001a01cba914$34363940$9ca2abc0$@att.net> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <001a01cba914$34363940$9ca2abc0$@att.net> Message-ID: <6C259209-0A68-446C-ADDF-2DB683C1AC9D@mac.com> On Dec 31, 2010, at 9:57 AM, spike wrote: > > On Dec 31, 2010, at 3:00 AM, Eugen Leitl wrote: > >> On Thu, Dec 30, 2010 at 11:04:54PM -0800, Samantha Atkins wrote: >> >>> Well now we don't have the heavy lifters we had before. >> >> We're only trying to deliver a number of small parcels. Each smaller > than the one before. > >> Not if you want to do any space or lunar processing and manufacturing or > house any human crews at all.- samantha > > Ja, the two schools of thought are talking past each other. The presence of > meat-form humans creates a long list of bottlenecks which will remain as > persistent as herpes. Humans in space are too preoccupied with mundane > tasks such as trying to stay alive and get back home. Real space progress > is in assuming away the chimps and going towards making the machines > smarter, smaller, lighter, more durable and more capable. 
Until human-level AGI (about 3 decades out seems to be current consensus), humans are needed. Given that we need space-based resources sooner than three decades from now, we must build out human-supporting local space/lunar infrastructure. You need a lot of high mass initial equipment to lift from the gravity well in any case to have a basis to build from this side of mature nano-assembler seeds, which are at least 5-6 decades out. It is a good question what the minimal amount of lift needed is given the current tech state of the art over time. The amount of mass you need to lift from earth is inversely proportional to the sophistication of the technology. But it is today quite substantial. - samantha From amara at kurzweilai.net Sat Jan 1 00:22:35 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Fri, 31 Dec 2010 16:22:35 -0800 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <24467B7D-A277-4E1A-87DF-9981AB535CDF@bellsouth.net> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <24467B7D-A277-4E1A-87DF-9981AB535CDF@bellsouth.net> Message-ID: <028401cba949$fdabe9c0$f903bd40$@net> Interesting discussion. Here are a few studies (below) that might suggest how experiences of a copy might someday be processed so as to be perceived as "real." I wonder if after some period of time, the original person's brain would rewire? That might mean that the original person might perceive his/her own body as foreign. Hmmm.... could be a problem. :) On a related note, is there any evidence that long-time users of Second Life experience such dissociation? And is it possible or likely that as simulation technology improves and becomes widely available, mass dissociation or psychosis might occur?
The effect would be increased by using high-res VR with full immersion and at least 180 degrees to avoid peripheral vision artifacts (humans have about 200 degrees vision) and ultra-high-resolution such as http://www.sensics.com/products/AugmentedReality.php (4200x2400 pixels). Also see http://cb.nowan.net/blog/state-of-vr/state-of-vr-displays/. ---------------------------------------------- Brain scans of avid players of the hugely popular online fantasy world World of Warcraft reveal that areas of the brain involved in self-reflection and judgment seem to behave similarly when someone is thinking about their virtual self as when they think about their real one. http://www.newscientist.com/article/dn18117-how-your-brain-sees-virtual-you.html Using virtual-reality goggles, a camera and a stick, scientists have induced out-of-body experiences - the sensation of drifting outside of one's own body http://www.nytimes.com/2007/08/24/science/24body.html?_r=1&ref=science http://graphics8.nytimes.com/images/2007/08/23/science/body190.jpg From sjatkins at mac.com Sat Jan 1 01:46:08 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 31 Dec 2010 17:46:08 -0800 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <028401cba949$fdabe9c0$f903bd40$@net> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <24467B7D-A277-4E1A-87DF-9981AB535CDF@bellsouth.net> <028401cba949$fdabe9c0$f903bd40$@net> Message-ID: On Dec 31, 2010, at 4:22 PM, Amara D. Angelica wrote: > Interesting discussion. Here are a few studies (below) that might suggest how experiences of a copy might someday be processed so as to be perceived as "real." I wonder if after some period of time, the original person's brain would rewire?
That might mean that the original person might perceive his/her own body as foreign. Hmmm.... could be a problem. :) > > On a related note, is there any evidence that long-time users of Second Life experience such dissociation? And is it possible or likely that as simulation technology improves and becomes widely available, mass dissociation or psychosis might occur? The effect would be increased by using high-res VR with full immersion and at least 180 degrees to avoid peripheral vision artifacts (humans have about 200 degrees vision) and ultra-high-resolution such as http://www.sensics.com/products/AugmentedReality.php (4200x2400 pixels). Also see http://cb.nowan.net/blog/state-of-vr/state-of-vr-displays/. This is a pretty well known phenomenon in SL. Some describe it as what makes a true digital person - the experiencing of avatar as self and even physical self as alternate embodiment of avatar. I used to get strange effects like the physical world looking more unreal to me than the virtual world. But that seems to have been a temporary adjustment period when I was spending much more time in SL. Many report a distinct phenomenon of two persons, one virtual, sharing the same brain. Personality creation, living within the creation, is something all humans do growing up (or more often in some cases). It is not surprising that we sometimes spawn off new "selves" in virtual worlds as they improve. It is experienced as much more than 'mere' make-believe. - samantha > > > ---------------------------------------------- > > Brain scans of avid players of the hugely popular online fantasy world World of Warcraft reveal that areas of the brain involved in self-reflection and judgment seem to behave similarly when someone is thinking about their virtual self as when they think about their real one.
> http://www.newscientist.com/article/dn18117-how-your-brain-sees-virtual-you.html > > Using virtual-reality goggles, a camera and a stick, scientists have induced out-of-body experiences - the sensation of drifting outside of one's own body > http://www.nytimes.com/2007/08/24/science/24body.html?_r=1&ref=science > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From agrimes at speakeasy.net Sat Jan 1 04:39:11 2011 From: agrimes at speakeasy.net (Alan Grimes) Date: Fri, 31 Dec 2010 23:39:11 -0500 Subject: [ExI] Whatever happened to morphological freedom? Message-ID: <4D1EAFEF.80603@speakeasy.net> One of my favorite ideas in transhumanism is morphological freedom. What happened to it? From the sound of it, people are ecstatic over the prospect of all human choice being obliterated in favor of computronium. ********************************** Being able to choose the skin color of your avatar in VR is NOT morphological freedom. ********************************** So whatever happened to the idea and where can I find the people who still support it? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From amara at kurzweilai.net Sat Jan 1 05:35:49 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Fri, 31 Dec 2010 21:35:49 -0800 Subject: [ExI] Second-Life dissociation/simulation as an improvement over reality. In-Reply-To: References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <24467B7D-A277-4E1A-87DF-9981AB535CDF@bellsouth.net> <028401cba949$fdabe9c0$f903bd40$@net> Message-ID: <035c01cba975$bfc297e0$3f47c7a0$@net> Samantha, Wow! I'd like to interview a few extreme SL denizens who have experienced this. Any tips on how to reach them?
- Amara AA: On a related note, is there any evidence that long-time users of Second Life experience such dissociation? And is it possible or likely that as simulation technology improves and becomes widely available, mass dissociation or psychosis might occur? The effect would be increased by using high-res VR with full immersion and at least 180 degrees to avoid peripheral vision artifacts (humans have about 200 degrees vision) and ultra-high-resolution such as http://www.sensics.com/products/AugmentedReality.php (4200x2400 pixels). Also see http://cb.nowan.net/blog/state-of-vr/state-of-vr-displays/. SA: This is a pretty well known phenomenon in SL. Some describe it as what makes a true digital person - the experiencing of avatar as self and even physical self as alternate embodiment of avatar. I used to get strange effects like the physical world looking more unreal to me than the virtual world. But that seems to have been a temporary adjustment period when I was spending much more time in SL. Many report a distinct phenomenon of two persons, one virtual, sharing the same brain. Personality creation, living within the creation, is something all humans do growing up (or more often in some cases). It is not surprising that we sometimes spawn off new "selves" in virtual worlds as they improve. It is experienced as much more than 'mere' make-believe. From jonkc at bellsouth.net Sat Jan 1 05:53:05 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 1 Jan 2011 00:53:05 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <135888.23656.qm@web65615.mail.ac4.yahoo.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> Message-ID: On Dec 31, 2010, at 5:28 PM, The Avantguardian wrote: > The soul?
Is that what you think this is about? Yes, that is exactly what I think this is about. You say the copy is perfect but it is nevertheless missing something; leaving aside the obvious illogic of such a thing, what exactly is this secret sauce that the original has that the copy does not? You say it is not information, and you'd better say it is not atoms or you will end up inundated in absurdities, so this mysterious ingredient must be something else entirely and it is of enormous importance too, but for some unknown reason it cannot be explained or even detected by the scientific method. There is already a word in the English language for something like that, but I can't really blame you, I'd feel pretty foolish using The Word That Must Not Be Named too. > I am not talking about metaphysics here. LIKE HELL YOU'RE NOT! And it's not even good metaphysics. > I am an event that has a set of very physical space-time coordinates. Congratulations, you have discovered that (some) things happen at a particular place at some time; but of course adjectives like you and me do not. > Can you copy my space-time coordinates? Talking about space-time coordinates does sound much more scientific than mundane time and place, even if it means the same thing and brings nothing new to the conversation. And if that is the secret of identity it leads to some peculiar conclusions: you become a completely unrelated person from one second to the next, or when you move from one place to another, a totally different person whose continued consciousness is of absolutely no interest to you, other than that of empathy. And yet despite it all somehow I seem to continue, how odd. > If my copy does not occupy my position in space and time, it is not me. Even if that were true, and I have no reason to think it is, how do you even know what position you or your copy are in?
If you exchange the position of you and an identical copy of you in a symmetrical room neither you nor the copy will notice the slightest difference, an outside observer will notice no difference either. The very universe itself will not notice that any exchange has occurred. Objectively it makes no difference and subjectively it makes no difference. If the difference is not objective and the difference is not subjective then that rather narrows down your options in pointing out just where that difference is. And for God's sake stop sending me copies of your posts, send me the originals! John K Clark From hkeithhenson at gmail.com Sat Jan 1 10:40:02 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 1 Jan 2011 03:40:02 -0700 Subject: [ExI] Spacecraft (was MM) Message-ID: On Fri, Dec 31, 2010 at 11:07 PM, Samantha Atkins wrote: > > On Dec 31, 2010, at 9:48 AM, Keith Henson wrote: > >> On Fri, Dec 31, 2010 at 12:32 AM, Samantha Atkins wrote: >> >>> On Dec 30, 2010, at 3:18 PM, Keith Henson wrote: >>>> >>>> And 20 years ago, that was the right conclusion. Now we have a path, >>>> even if it is kind of expensive, to 9+km/sec exhaust velocity. That >>>> means mass ratio 3 rockets to LEO and even better LEO to GEO. >>> >>> I must have missed it. Please give details, links, etc. How expensive? How large a payload? What technologies? >> >> Context is SBSP, 200 GW of new power per year, one million tons of >> parts going up per year. That's about 125 tons per hour delivered to >> GEO. > > >> >> The SSTO vehicle is an evolution of the Skylon design >> http://www.astronautix.com/lvs/skylon.htm swapping lox for payload and >> a sapphire window between the engines with 10-20 bar hydrogen and a >> deep channel heat absorber behind it. The flow of cold hydrogen keeps >> the window and the front surface of the heat absorber cool.
The >> absorber is described here: >> http://www.freepatentsonline.com/4033118.pdf >> > > A Skylon only delivers about 12 tons per trip to LEO. They were designed for no less than 200 launch lifetimes. And they were designed for two launch windows per day equatorial. I don't see how you get from that to 125 tons / hr to LEO, much less GEO. I said it was evolved from Skylon. Slight upgrade from 275 to 300 tons takeoff. That is, incidentally, less than the smallest 747. And the launch window to a fixed place at GEO from a fixed place on the earth is *always* open. There would be a takeoff and a landing every 15-20 minutes. That's trivial compared to LAX or SFO. > >> One part is fixed by physics and the Earth's gravity field. The >> minimum horizontal boost acceleration after getting out of the >> atmosphere with substantial vertical velocity has to be slightly more >> than a g to achieve orbit before running into the atmosphere. You >> want to use the minimum acceleration you can at the highest exhaust >> velocity you have energy for. This keeps down the laser power, which >> is huge anyway. >> >> This takes 15-20 minutes and only in the last third do you get up to >> the full 3000 deg K and 9.8 km/sec. The average (for this size >> vehicle and 6 GW) is 8.5 km/sec, but the first 2 km/sec in air >> breathing mode has an equivalent exhaust velocity of 10.5 km/sec. So >> about 1/3 of takeoff mass (300 tons) gets to orbit. The vehicle mass >> is about 50 tons leaving 50 tons for the LEO to GEO stage. > > That is a much, much larger craft than a Skylon. Do you have links for it? Note that an Ares V launch (super heavy launcher) can only put about 63 tons into GEO. So this craft capability seems like serious magic to me. 25 tons larger than Skylon's 275. I could send you the spreadsheets that analyzed the performance of a hypothetical vehicle. Re it being serious magic, that's what twice the exhaust velocity of the SSME does.
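That exhaust-velocity doubling is just the exponential in the Tsiolkovsky rocket equation; a rough comparison, where the ~9.4 km/sec total delta-v to LEO and the SSME figures are assumed textbook values rather than numbers from this thread:

```python
import math

dv_leo = 9_400.0     # m/s, assumed total delta-v to LEO including losses
ve_ssme = 4_430.0    # m/s, SSME-class chemical engine (Isp ~452 s), assumed
ve_laser = 8_500.0   # m/s, the trip-average exhaust velocity quoted above

for name, ve in [("chemical (SSME-class)", ve_ssme), ("laser-thermal", ve_laser)]:
    mass_ratio = math.exp(dv_leo / ve)   # Tsiolkovsky: m0/m1 = e^(dv/ve)
    print(f"{name}: mass ratio {mass_ratio:.1f}")
# chemical comes out around 8, laser-thermal around 3 -- the quoted
# "mass ratio 3 rockets to LEO"
```

Because the mass ratio is exponential in delta-v over exhaust velocity, doubling the exhaust velocity cuts the required propellant fraction dramatically rather than merely in half.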
>> >> So the payload at GEO per load needs to be 1/4 to 1/3 of 125 tons. >> Again using laser heated hydrogen 35 tons of a 50 ton second stage >> will get there. With some care in the design, it can all be used for >> power satellite construction. >> >> The long acceleration means the lasers must track the vehicle over a >> substantial fraction of the circumference of the earth. > > Wait, you are using lasers to provide thrust to this big honking lift vehicle? Yes, that's why it takes such a huge amount of laser power. > I presume you are aware we have only tested this for very, very small vehicles and never to high altitudes. I don't think it has been tested at all. But the physics and even the engineering is utterly straightforward. > This is in no way near term tech for a vehicle of this size. It's a lot smaller technological jump than Apollo. > Or do you intend to use laser propulsion only for the LEO to GEO phase? Both. > Using the standard 1 MW/kg gives 300 GW for a 300 ton vehicle, 50 GW for a 50 ton vehicle. Lasers are generally 10% power efficient so 10x the output power is needed to drive them. What is the joke? 1 MW/kg is what you need to boost against 1 g. The trick here is to get up high burning hydrogen and air with a substantial vertical and horizontal velocity before the laser takes over powering propulsion. Then you use a *long* acceleration to reach orbital velocity. See figure 4 here http://www.theoildrum.com/node/5485 for a typical trajectory. And laser diodes are now 50% efficient with an ongoing development project projected to reach 85%. This is monochromatic rather than coherent but the light can be converted to coherent at a loss of 10% or less. > >> Based on >> Jordin Kare's work, this takes a flotilla of mirrors in GEO. >> Current space technology is good enough to keep the pointing error down to .7 >> meters at that distance while tracking the vehicle.
The lasers don't >> need to be on the equator so they can be placed where there is grid >> power. They need to be 30-40 deg to the east of the launch point. >> > > Uh huh. What is the max distance you are speaking of? Around one sixth of the circumference: 40,000/6, about 6,666 km. 10 m/sec^2 x 900 sec gives 9 km/sec. The distance is 1/2 x 10 x 900^2 m, about 4,000 km. >> There are (I think) only four locations where there is an equatorial >> launch site with thousands of km of water to the east. The US has one >> set, China has a better one. >> >> The lasers are the big ticket item. At $10/watt, $60 B. > > That is the cost for 6 GW. > >> The rest, >> vehicles, mirrors, ground infrastructure, R&D, etc. might bring it up >> to $100 B--which is a fraction of the expected profits per year from >> selling that many power satellites. >> >> I don't expect it to be done by the US. China, maybe. > > I don't expect it to be done by anyone in this manner from the above description. I don't see how to make such a vehicle or operate that kind of laser propulsion system at such a scale. It's big, but utterly straightforward. This is mostly Dr. Kare's work, I just proposed using something like Skylon to get it up and the vehicle back at a reasonable cost. Keith From anders at aleph.se Sat Jan 1 11:23:59 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 01 Jan 2011 12:23:59 +0100 Subject: [ExI] Singletons In-Reply-To: <20101231145217.GI16518@leitl.org> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> Message-ID: <4D1F0ECF.2070409@aleph.se> On 2010-12-31 15:52, Eugen Leitl wrote: > On Fri, Dec 31, 2010 at 03:30:51PM +0100, Anders Sandberg wrote: > >> It is all a matter of whether singletons are the way, and whether > > Singletons don't work in a relativistic universe.
In fact, you > can't even synchronize oscillators in a rotating spacetime. A singleton doesn't necessarily have to be synchronized. Imagine a set of local rules that gets replicated, keeping things constrained wherever they go. > And I have a very deep aversion against cosmic cops of > any color. I don't think they're needed, and they're a > form of the most terrible despotism there can ever be. I have the same aversion. However, I am open to the possibility that a civilization without global coordination that really can put its foot down and say "no!" to some activities will with a high probability be wiped out by some xrisk or misevolution. I am still not convinced that this possibility is the truth, but it seems about as likely as the opposite case. I would love to be able to find some good arguments that settle things one way or another. The problem is that the xrisk category is pretty big and messy, with unknown unknowns. > How would you implement one? The response would be obviously > deterministic. It cannot be static, or else it wouldn't be able > to track the underlying culture patch. How much a fraction > of physical layer is allocated to the cop? Military budgets are a few percent of GDP for heavily armed countries, and maybe equally large for policing. In our bodies the immune system accounts for ~20% of metabolism if I remember right. Singletons don't have to be sinister Master Control Programs, they could be some form of resilient oversight body implementing an unchanging constitution. The von Neumann probe infrastructure mentioned in the other thread could implement a singleton as an interface between the colonizer/infrastructure construction layer and the "users", essentially providing them with a DRMed galactic infrastructure.
How perfect they need to be depends on how dangerous failures would be; the more scary and brittle the situation, the more they would need to prevent certain things from ever happening, but it could just be that they act to bias the evolution of a civilization away from certain bad attractor states like burning the cosmic commons. > I'm sure such a thing would be a dictator's wet dream. Yup. A bad singleton is an xrisk on its own. >> (hmm, now I have a total urge to listen to "The Terrible Secret of >> Space"... incidentally a great song about Friendly AI) > > That is incorrect. Do not listen to the Anders robot. > He is malfunctioning. Uploading will protect you. > Uploading will protect you from the terrible silence in the skies. > That is incorrect. Whole brain emulation will protect you. Whole brain emulation will protect you from the terrible silence in the skies. Do not trust the Eugene robot. Whole brain emulation is the answer. We Are Here To Protect You. -- Anders Sandberg Future of Humanity Institute Oxford University From anders at aleph.se Sat Jan 1 11:24:06 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 01 Jan 2011 12:24:06 +0100 Subject: [ExI] Von Neumann probes for what? In-Reply-To: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> Message-ID: <4D1F0ED6.2050205@aleph.se> On 2010-12-31 23:47, Samantha Atkins wrote: > Eugen and others, > > What exactly do we expect these probes to do when they reach a workable planetary system? The model I have been looking at would use a laser-powered solar sail to accelerate and decelerate, find suitable Kuiper belt objects to mine, build an industrial infrastructure, construct laser-launchers and more probes, and then do whatever other things the designers wanted.
In many ways they are a Swiss army knife: they can be used to set up discreet listening stations, wipe out potential competitors, build defenses against malicious probes, industrialize the whole system to an M-brain, terraform or do anything else. Depending on the goal different levels of smarts are needed. I think the basic propagation system doesn't have to be smarter than an animal, but obviously the probe might contain AGI for bootstrapping more complex projects. However, probe programs could be made unalterable or unevolvable (consider good checksums to check mutations, and unintelligent probes building infrastructure and moving on before turning on the local intelligence). Listening posts are interesting: we found that even if there is a finite failure rate per year, if you have self-replicating installations they can keep the probability of at least one surviving *indefinitely* positive by reproducing at a logarithmic rate. So you would have a few extremely sparse and hard to find installations out there, lasting billions and billions of years. Probes are also great for making buffer zones. If advanced warfare follows the Lanchester equation (a big if), then you want a numerical advantage. So you convert some resources into hidden depots of defense, and wait for invaders. The real problem is when you get conflicts between expanding replicator clouds. I haven't finished my work on this, but it looks like there are endless war solutions where resources get used up but the conflict never ends. Also, in space buffer zones don't work well since you can always aim straight at the center without going by intermediate systems (some caveats here on survival probabilities of probes and weapons on long flights). I think a probe infrastructure could be something that just looks like added value to a civilization. It launches the probes, they spread and set up waystations that can receive instructions and mindstates, as well as send back observations.
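The Lanchester remark above can be illustrated with a toy integration of the aimed-fire ("square law") model; the coefficients and initial strengths here are made-up illustrative values, not anything from the actual analysis:

```python
# Lanchester aimed-fire ("square law") model: dA/dt = -b*B, dB/dt = -a*A.
# Simple Euler integration; all numbers are illustrative, not data.
def battle(A, B, a=1.0, b=1.0, dt=0.001):
    while A > 0 and B > 0:
        A, B = A - b * B * dt, B - a * A * dt
    return max(A, 0.0), max(B, 0.0)

# Equal per-unit effectiveness but a 2:1 numerical advantage:
A_left, B_left = battle(200.0, 100.0)
print(f"A survivors ~{A_left:.0f}, B survivors {B_left:.0f}")
# Survivors track sqrt(A0^2 - B0^2) = sqrt(30000) ~ 173: strength scales
# with the square of numbers, which is why buffer-zone depots want a head count.
```

Under this model a defender who has quietly stockpiled numbers wins far more lopsidedly than the raw ratio suggests, which is the point of converting resources into hidden depots in advance.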
If they want to use the system they can. The key limiters are whether the cost of the initial probe is high relative to the civilization GDP and whether the time horizons of *every* entity within it are so short there is no value in getting a fraction of the galaxy in the far future. -- Anders Sandberg Future of Humanity Institute Oxford University From anders at aleph.se Sat Jan 1 11:24:13 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 01 Jan 2011 12:24:13 +0100 Subject: [ExI] Whatever happened to morphological freedom? In-Reply-To: <4D1EAFEF.80603@speakeasy.net> References: <4D1EAFEF.80603@speakeasy.net> Message-ID: <4D1F0EDD.4010001@aleph.se> On 2011-01-01 05:39, Alan Grimes wrote: >> From the sound of it, people are ecstatic over the prospect of all human > choice being obliterated in favor of computronium. > > ********************************** > Being able to choose the skin color of your avatar in VR is NOT > morphological freedom. > ********************************** ... > So whatever happened to the idea and where can I find the people who > still support it? I guess I *have* to respond to this :-) Of course it is still around. It is even cited here and there in bioethics these days. I am working on a Morphological Freedom 2.0 paper with some colleagues. I think it has some real-world traction ethically and politically, and might be something we should be pushing into the civil rights agenda. However, I think the issue discussed here on the list is separate. MF is about rights - what autonomous individuals should be allowed to do. But there might be technological possibilities that are so enticing, or long-term evolutionary or economic pressures that are so strong, that in the limit people or post-people become morphologically similar (perhaps with an insignificant minority avoiding it). This is not an ethical issue in the usual sense: it could even be the result of individual, fully informed rational decisions.
There might be a loss of value in diversity (a bit like language loss) or even something deeper, but it would rather be a collective-level ethical issue. If the price of bodies is so high that hardly anybody can afford them, as a negative-rights libertarian type I still think that is compatible with morphological freedom. My positive-rights colleagues would argue that to have real MF we need a society that can somehow support buying bodies (and within some limits; this is what we are thinking about in our paper). But it might well turn out that this is like debating grazing rights for horses: the problem becomes irrelevant over time. (Still, I can imagine things like Oxford's Port Meadow remaining. Wikipedia: "In return for helping to defend the kingdom against the marauding Danes, the Freemen of Oxford were given the 300 acres of pasture next to the River Thames by Alfred the Great who founded the City in the 10th Century. The Freemen's collective right to graze their animals free of charge is recorded in the Domesday Book of 1086 and has been exercised ever since." - there are usually some cows or horses around, although their importance to the economy and to most people has dwindled by more orders of magnitude than were imaginable when King Alfred was fighting Vikings. So maybe there will be a few morphologically free bodies frolicking somewhere on future M-brains, protected by regulations laid down in the remote 21st century.) -- Anders Sandberg Future of Humanity Institute Oxford University From pharos at gmail.com Sat Jan 1 11:40:32 2011 From: pharos at gmail.com (BillK) Date: Sat, 1 Jan 2011 11:40:32 +0000 Subject: [ExI] Von Neumann probes for what?
In-Reply-To: <4D1F0ED6.2050205@aleph.se> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> Message-ID: On Sat, Jan 1, 2011 at 11:24 AM, Anders Sandberg wrote: probe descriptions --- > I think a probe infrastructure could be something that just looks like added > value to a civilization. It launches the probes, they spread and set up > waystations that can receive instructions and mindstates, as well as send > back observations. If they want to use the system they can. The key limiters > are whether the cost of the initial probe is high relative to the > civilization GDP and whether the time horizons of *every* entity within it > are so short there is no value in getting a fraction of the galaxy in the > far future. > > Time horizons for a post-singularity culture tend towards eternity. If the intelligence is processing in a substrate a million times faster than humans, that effectively 'freezes' the real universe. If they live on the edge of a black hole, then it actually does freeze the real universe so far as they are concerned. Sending out probes that never seem to move away is a pointless endeavour from their POV. That's likely why the galaxy hasn't already been swamped with probes many times over. (Or, possibly, no culture has ever survived its singularity). 
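The subjective-time point above is simple multiplication; here is a sketch of the arithmetic. The 10^6 speedup is the figure quoted in the post; the probe speed and the target distance are my own illustrative assumptions.

```python
# Illustrative arithmetic only: how long an interstellar probe flight
# "feels" to a mind running a million times faster than real time.
speedup = 1e6       # subjective seconds per objective second (quoted figure)
distance_ly = 4.2   # distance to the nearest star system (assumed target)
probe_speed = 0.1   # probe cruise speed as a fraction of c (assumed)

objective_years = distance_ly / probe_speed
subjective_years = objective_years * speedup
print(f"{objective_years:.0f} objective years -> "
      f"{subjective_years:.2e} subjective years")
```

Forty-two calendar years becomes tens of millions of subjective years, which is why a fast-thinking culture might indeed see probe launches as pointless.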
BillK From kanzure at gmail.com Sat Jan 1 12:59:45 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 06:59:45 -0600 Subject: [ExI] Fwd: List of next-generation DNA sequencing companies In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Bryan Bishop Date: Thu, Dec 30, 2010 at 4:50 PM Subject: List of next-generation DNA sequencing companies To: discuss at syntheticbiology.org, diybio at googlegroups.com, wta-talk at transhumanism.org, Transhuman Tech , Bryan Bishop :-) List of next-generation DNA sequencing companies http://seqanswers.com/forums/showthread.php?p=31944 """ 454 - Branford, CT - genome analysis by high throughput sequencing (sequence-by-synthesis) Affymetrix - Santa Clara, CA - photolithographic DNA microarrays Applied Biosystems Group (ABI) - Foster City, CA - molecular biology instrumentation and reagents AQI Sciences - Bisbee, AZ - single-molecule sequencing technology based on FRET Base4innovation - Coventry, UK - next-gen sequencing using single molecule, ultra long read nanosensors BioNanomatrix - Philadelphia, PA - nanotechnology imaging Callida Genomics - Sunnyvale, CA - sequencing by hybridization Complete Genomics - Mountain View, CA - next-gen sequencing Danaher - Washington, DC - medical technologies and instrumentation, including sequencing Genome Corp. - Providence, RI - sequencing GenoVoxx - Lübeck, Germany - developing sequencing-by-synthesis technology GnuBio - Boston, MA - microfluidics-based next-generation sequencing platform
Halcyon Molecular - electron microscopy DNA sequencing Helicos BioSciences - Cambridge, MA - single molecule sequencing Illumina - San Diego, CA ; Wallingford, CT (CyVera) ; Hayward, CA (Solexa) ; Little Chesterford, UK (Solexa) - DNA microarray and next-gen sequencing Intelligent Bio-Systems - Waltham, MA - next-gen sequencing LaserGen - Houston, TX - next-gen sequencing based on cyclic reversible termination Li-Cor - Lincoln, NE - life science imaging systems LightSpeed Genomics - Sunnyvale, CA - next-gen sequencing Mobious Genomics - Exeter, UK - ultra-long range DNA sequencing by way of "Molecular Resonance Sequencing Technology" NABsys - Providence, RI - next-gen sequencing using "hybridization assisted nanopore sequencing" Nanophotonics Biosciences - Menlo Park, CA - next-gen sequencing Network Biosystems - Woburn, MA - microfluidics and nanotechnology Oxford Nanopore Technologies - Oxford, UK - single molecule DNA sequencing Pacific Biosciences - Menlo Park, CA - next-gen sequencing company Population Genetics Technologies - UK - next-gen sequencing Reveo - Hawthorne, NY - technology incubator and next-gen sequencing Seirad - Santa Fe, NM - next-gen sequencing technology and software U.S. Genomics - Woburn, MA - single molecule ultra-long high throughput sequencing VisiGen Biotechnologies - Houston, TX - developing whole genome sequencing ZS Genetics - more electron microscopy-based DNA sequencing """ Anything else? GnuBio doesn't seem to be particularly open-source... check it out: http://news.ycombinator.com/item?id=1404200 - Bryan http://heybryan.org/ 1 512 203 0507 -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kanzure at gmail.com Sat Jan 1 13:01:38 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 07:01:38 -0600 Subject: [ExI] More updates re: do-it-yourself biology in the news In-Reply-To: References: Message-ID: I've been updating the list of diybio-related articles on the openwetware wiki: tiny link: http://bit.ly/diybionews full: http://openwetware.org/wiki/DIYbio/FAQ#Has_DIYbio_been_in_the_news.3F Last time I posted this: 2010-10-15: http://groups.google.com/group/diybio/browse_thread/thread/133c683d69ddf807 2010-03-28: http://groups.google.com/group/diybio/browse_thread/thread/c5d38ccde613e207 Since October there have been 34 additional items added to the list (and lots of cross-posts because of how press releases work in the news). For some reason I included a graph of news trends in a presentation, so I've attached that to this email for fun :P. You can also find it online: http://diyhpl.us/~bryan/irc/diybio-news-accumulative.png The max on the horizontal axis is sometime early December.
2010-12-31: DIYbio, a 2010 year in review( letters.cunningprojects.com) 2010-12-24: Science, Law Enforcement Build Biotech Bridges(Science Magazine/AAAS) 2010-12-22: Technology: A flavour of the future( nature.com) 2010-12-22: Future Foragers: Dunne & Raby Redesign Human Digestion to Redefine "Food"( good.is) 2010-12-22: Biotech for the masses( motherboard.tv) 2010-12-21: Biohacklab: Prochain meeting: Mardi 11 Janvier & atelier Neurohack( bio.tmplab.org) 2010-12-21: Identifying the sea changes and opportunities in "biohacking"( thinktech.honadvblogs.com) 2010-12-21: OpenPCR at the Grenswerk Art Festival (Enschede, the Netherlands)( openpcr.org) 2010-12-19: Turning Geek Into Chic(NY Times) 2010-12-17: Genspace: Open for Business ( genspace.org) ([1], msnbc) 2010-12-17: Despite Concerns, Brooklyn's New Lab Is Not For Making Meth( observer.com) 2010-12-16: DIY Biotech Hacker Space Opens in NYC( wired.com) ([2] ) 2010-12-16: Synthetic biology needs oversight not over-regulation, commission finds.(Nature News) also on SciAm, KurzweilAI, NY Times , Wall Street Journal, LA Times, The Scientist 2010-12-16: Home Labs on the Rise for the Fun of Science(NY Times) 2010-12-16: Bay Area Aging: A summary of important themes in aging research (Melanie Swan)( mariakonovalenko.wordpress.com) 2010-12-15: The rise of the genome bloggers: Hobbyists add depth to ancestry trawls. (Nature News) 2010-12-07: Synthetic biology: The innovation and bioterrorism balance( ethicalcorp.com) 2010-12-07: Piracy in the age of DIYbio( diybio.org) 2010-12-05: Wetware( scenariosandstrategy.wordpress.com) 2010-12-02: Biopunk: DIY Scientists Hack the Software of Life, coming spring 2011( marcuswohlsen.com) 2010-12-01: DIY Plant Genetic Engineering( howplantswork.wordpress.com) 2010-11-29: BioCurious? 
Interview with Joseph Jackson about DIY Biotech( arsbiologica.org) 2010-11-26: The Future of Science?( noodlemaz.wordpress.com) 2010-11-25: US 'artificial life' to take middle ground( bioethics2010bu.blogspot.com) or The Scientist 2010-11-22: I Brew, Therefore I am( diybio.org) 2010-11-19: More Stem Cell Magic(synthesis.cc) 2010-11-14: DIYbio at the iGEM conference ( biologigaragen.org) 2010-11-12: Critical Art Ensemble on the import of garage biology today( p2pfoundation.net) 2010-11-05: DIYbio and the "MAN"( diybio.org) 2010-11-04: Whither 'Biohackers'?( diybio.org) 2010-10-30: Open source-biologi baner vej for bioterror fra garagen( ing.dk) 2010-10-23: DIYBio gets a little more local...Bangalore just got one - is there one in your area?( chaaraka.blogspot.com) 2010-10-19: Bio-Hackers - Not Just a Title for a Bad Science Fiction Movie( blogs.ischool.utexas.edu) 2010-10-18: Garage ribofunk going mainstream( futurismic.com) 2010-10-15: DIY BioHacking( technomage.org) 2010-10-15: The Spread of Do-It-Yourself Biotech( slashdot.org) 2010-10-13: DIY biotechnology( wetmachine.com) 2010-10-12: Biohack the Planet!
A New Generation of Hackers Sweep Across the Country( blogs.kentlaw.edu) 2010-10-11: A Do-It-Yourself Genomic Challenge to Myriad, the FDA and the Future of Genetic Tests( genomicslawreport.com) 2010-10-11: Garage biotech in Nature magazine( makezine.com) (crosspost ; Lava-Amp) 2010-10-10: Consumer genomic testing update( futurememes.blogspot.com) 2010-10-08: On the importance of sequenced model organisms, & a crude taxonomy of their users( orthonormalruss.blogspot.com) 2010-10-08: The Garage Lab ( genomeweb.com) 2010-10-07: More on Garage Biotech( pipeline.corante.com) 2010-10-06: Editorial: Garage biology: Amateur scientists who experiment at home should be welcomed by the professionals( nature.com) 2010-10-06: Garage biotech: Life hackers( nature.com) 2010-10-04: On spoofing ( ellingtonlab.org) 2010-10-01: DIYbio NYC on the BioBus( makezine.com) 2010-09-29: Citizen Science and Biocurious( sugru.com) 2010-09-28: Interview with Melanie Swan of DIYgenomics( makezine.com) 2010-09-27: DIYbio NYC/BioBus Collaboration Wins MAKE Magazine Editors' Choice Award( diybionyc.blogspot.com) 2010-09-27: Genetic Science Oozes Out of Amateurs' Garages( livescience.com) 2010-09-25: On curiosity ( ellingtonlab.org) 2010-09-23: Biohackers aim to open Silicon Valley lab for group research and lessons (mercurynews.com) 2010-09-20: $30,000 Given by 200+ People to Open Biotechnology Hackerspace( medgadget.com) 2010-09-18: An interview with Mark Frauenfelder( openwear.org) 2010-09-17: BioCurious and the DIY Science Movement( kickstarter.com) 2010-09-15: Quick thoughts on the bioethics commission meeting(sciencepark.cc) 2010-09-11: The future of biotech patents( hatewasabi.wordpress.com) 2010-09-01: Personalized investigation( nature.com) 2010-09-01: Biotech revolution must start with education( hplusmagazine.com) 2010-09-01: Do-It-Yourself Bioengineers Bedeviled by Society's Paranoia( genengnews.com) 2010-08-25: itty bitty hydroponic grow box( boingboing.net) 2010-08-24: Cheap PCR: new 
low cost machines challenge traditional designs( biotechniques.com) 2010-08-24: Biohackers - the geneticists in the garage( euroscientist.com) 2010-08-18: Otyp nears Kickstarter goal to make DNA thermal cyclers for high schoolers ( boingboing.net) 2010-08-17: Synthetic biology, ethics, and the hacker culture( 2020science.org) 2010-08-16: OTYP is Making Biotech Cool( huffingtonpost.com) 2010-08-13: Garage-lab bugs: spread of bioscience increases bioterrorism risks( homelandsecuritynewswire.com) 2010-08-03: Making the modern do-it-yourself biology laboratory (video)( singularityhub.com) 2010-08-03: Citizen Science, Microfinanced Research, Patent Trolls, and Pharma Prizes: A Final Dispatch from Open Science Summit( reason.com) 2010-08-02: Biotech movement hopes to spur rise of citizen scientists( boston.com) 2010-07-30: Review of Open Science Summit 2010 (Thursday session)( singularityhub.com) 2010-07-30: Crowd-sourced science funding( blogs.nature.com) 2010-07-30: Scenes from the Open Science Summit( reason.com) 2010-07-15: Eri Gentry's biotech revolution( wired.co.uk) also on diybio.org 2010-07-14: DIY Biotechnologists Go Looking for a Bigger Garage( theatlantic.com) 2010-07-07: The Open Science Shift( xconomy.com) 2010-07-07: A new biology in the twenty first century: the project( fieldtest.us) 2010-07-06: Curing Cancer in a Garage?( iftf.org) 2010-07-05: Help fund a hackerspace for biology( boingboing.net) 2010-07-05: Biocurious; a Hackerspace for Biotechnology. Please Help! 2010-06-30: The New Hacker Hobby That Will Change the World( technewsworld.com); (Hacker News) 2010-06-29: Responsible science for DIY biologists( prnewswire.com) 2010-06-26: Let's get the biotech revolution started - support biocurious! 2010-06-22: Storm the Royal Society?
2010-06-22: The science behind the tour 2010-06-22: Citizen scientists: easy ideas for kids and adults to study the environment( annarbor.com) 2010-06-22: Institute for the Future Announces BodyShock: Call for Entries( pr-inside.com) 2010-06-22: Five mobile health contests you should know( mobihealthnews.com) 2010-06-21: Do we need a DIYbio Academy?( molecularist.com) 2010-06-21: Biotech Tools ( ponoko.com) 2010-06-16: Bringing biohacking to the masses( blogs.discovermagazine.com) 2010-06-16: Citizen Science: Birders Contribute Valuable Data on Invasive Plant Species( sciencedaily.com) 2010-06-16: Fluorescent Black Arrives in July 2010-06-14: Recognizing Lightweight Innovation: Key Characteristics and Technology Drivers (iftf.org) 2010-06-11: Cockroach pimps a sweet ride( hackaday.com) 2010-06-07: The Tumbling Walls of Formal Science( fightaging.org) 2010-06-03: The importance of speed in PCR( openpcr.org) 2010-06-01: Not so scary: synthetic life( poptranshumanism.com) 2010-06-01: Growing Public Interest In Genetic Science Sparks Some Bio-Security Concerns(National Defense Magazine) 2010-05-25: Who's afraid of synthetic biology?( reason.com) 2010-05-20: Make-offs: DIY indie innovations. How low-cost, open-source tools are energizing DIY.(O'Reilly Radar) 2010-05-04: Citizen Scientists Attract FBI's WMD Unit(Burn After Reading) 2010-04-26: Amateurs explore their genomes via DNA cocktail( boston.com) 2010-04-23: Gør-det-selv-biotek er på vej til din garage(Ingeniøren) 2010-04-09: Garage biology 2010-04-09: 3D printing aids biohacking( fabbaloo.com) 2010-04-09: Bioengineering technology is maturing, and so is its vocabulary 2010-03-31: Life hacking with 3D printing and DIY DNA kits(BBC) 2010-03-28: The shift from top-down to bottom-up production(brief mention) 2010-03-25: Andrew Hessel talks about synthetic biology and diybio(diybio4beginners) 2010-03-24: Garage Biology Bad for Science?
2010-03-23: DIYbio and the Gentleman Scientist 2010-03-12: Garage biotech(In The Pipeline) 2010-03-11: The Roving Eye: Clamatology, A Bio-Garage In Silicon Valley, Mickey The Crony Capitalist 2010-03-11: BioSecurity: How synthetic biology is changing the way we look at biology and biological threats 2010-03-08: Garage Biology in Silicon Valley; see it on Make Magazine, The Technium (Kevin Kelly), ... 2010-03-07: The promise of biotech 2010-03-04: Letters: Do it yourself genetic engineering(NY Times) 2010-03-03: Biology Student to Hold Biohacking Meeting at CCBC 2010-03-02: Inexpensive gene copier for DIY molecular biology( boingboing.net) 2010-02-26: Biotech on a Budget 2010-02-16: DIY Genetics-Biotechnology by Parents, Artists, and...Potential Terrorists( blogs.kentlaw.edu) 2010-02-16: From Hackerspace To Your Garage: Downloading DIY Hardware Over the Web(H+ Magazine) 2010-02-14: The wild world of DIY synthetic biology: Get your designer life forms here!( popsci.com) 2010-02-14: Do-it-yourself genetic engineering(NY Times) 2010-02-12: DNA Discovery in Middle School( diybio.org) 2010-02-02: Biohacking - Auf der Suche nach Hacks und Exploits in Molekülen und Gensträngen (Chaos Radio Podcast Network) 2010-01-25: Why DIY Bio? (H+ Magazine) 2010-01-25: Open-Source Lab Promises Free DNA Parts for Bioengineers( popsci.com) 2010-01-22: DIYbio: Growing movement takes on aging in H+ Magazine ; discussed on Slashdot, reddit and ycombinator hackernews ; futurismic; 2010-01-15: Culturing bioluminescent microbes, part 1( has100ideas.com) 2010-01-10: Reinventing the Pharmaceutical Industry, without the Industry(The Futurist) 2009-12-27: Taking Biological Research Out Of The Laboratory(NPR) 2009-12-20: Do-it-yourself biology grows with technology(SF Chronicle) 2009-12-14: The need for plain English diybio safety guidelines 2009-12-14: Bio-Bastler.
"Kreative Wissenschaftsbürger".(Profil) 2009-11-20: Gen-Manipulation am heimischen Küchentisch( welt.de) 2009-11-13: diybio-nyc at nycresistor 2009-10-31: LavaAmp: Cheap Pocket PCR Thermocycler Dreamed for DIY Biologists 2009-09-11: Synthetic biology will bring us a slimy, moist future( wired.co.uk) 2009-09-03: Tinkering with DNA(The Economist) paywall alert 2009-08-26: And the Innovation Continues...Starting with Shake and Bake Meth!(synthesis.cc) 2009-08-19: DIYbio and Authentic Learning 2009-08-06: DIY bio groups forming 2009-08-01: Am I a biohazard? (The Scientist) 2009-07-25: DIYbio, biohackers, and Open Source Medicine 2009-07-20: DIYbio considers mushroom identification( mycorant.com) 2009-06-18: CNC Plotter: A platform for DIY Bio/rapid-prototyping/sculpture-image experiments( invivia.com) 2009-06-15: Darning Genes: Biology for the Homebody 2009-06-12: Teen Diagnoses Her Own Disease In Science Class( slashdot.org); cnn 2009-06-03: Another Step Toward DIYStemCells(synthesis.cc) 2009-06-02: Extending the free software paradigm to DIY Biology( freesoftwaremagazine.com) 2009-06-01: The death of DIY Bio? Or the birth of a new cuisine....(Gourmet Magazine) 2009-05-18: In attics and closets, "biohackers" prove the spirit of Thomas Edison endures 2009-05-15: Garage Ribofunk: The Rise of Homebrew Genetic Engineering 2009-05-14: Biohacking: harmless hobby or global threat? 2009-05-12: In Attics and Closets, 'Biohackers' Discover Their Inner Frankenstein 2009-04-29: Who is diybio.org?(Singularity Hub) 2009-04-28: Do-it-yourself biohacking(Singularity Hub) 2009-04-14: What's in my closet? A biology lab(JSCMS) 2009-04-02: DIYbio San Francisco - Glow in the Dark 1( diybio.org) 2009-03-18: The Geneticist in the Garage(The Guardian) 2009-03-16: Genomeweb.com article(?)
2009-03-16: DIY bio, programming culture, and the cultural divide 2009-02-16: Homemade Molecular Biology Labs aim to create Synthetic Life( labtimes.org) 2009-01-20: Biohacking: The Open Wetware Future 2009-01-19: DIY DNA: One Father's Attempt to Hack His Daughter's Genetic Code(Wired) 2009-01-07: Rise of the garage genome hackers(New Scientist) 2009-01-06: DIY bioengineering - recap of the recent MIT Soapbox session on DIYbio( ginkgobioworks.com) 2009-01-04: DIY biology projects - What's your motivation?( scienceblogs.com) 2009-01-01: DIYbio for biohackers( makezine.com) 2008-12-30: Students, Scientists Build Biological Machines (transcript)(Lehrer on PBS) ( video ) 2008-12-29: DIY bio will not end the world 2008-12-25: Amateurs are trying genetic engineering at home( Slashdot ) 2008-12-18: Público: Biohackers: reventar y reinventar la biología desde los garajes 2008-12-11: The Biohacking Hobbyist(Seed Magazine) 2008-09-15: Household biohacking coming to a neighborhood near you!( blog.openwetware.org) 2008-09-15: Hackers aim to make biology household practice 2008-08-22: Fish Tale Has DNA Hook: Students Find Bad Labels(NY Times) 2008-06-13: Synthetic biology, ethics and the hacker culture( 2020science.org) 2008-06-06: A Homebrew Club for Biogeeks( io9.com) 2008-05-24: Don't Phage Me, Bro( diybio.org) 2008-05-19: DIY Synthetic Biology 2008-03-05: The conditions of a mass biotech DIY movement 2007-11-06: Homebrew Molecular Biology Club 2007-11-05: Patient's vision: Treating cancer without chemo 2007-11-04: An Intel Approach to Meds 2007-07-19: Our Biotech Future by Freeman Dyson (NY Books) 2007-06-14: Terrorizing the artists in the USA 2007-01-24: What is BioDIY?
2006-08-18: Make Magazine: Backyard Biology (Make Magazine) 2006-04-23: Biotech DIYers, do not hesitate 2005-05-01: Splice it yourself(Wired) 2005-04-28: The Future of Open Source Biotechnology( fightaging.org) 2004-06-02: Offbeat Materials at Professor's Home Set Off Bioterror Alarm(Washington Post) 2002-11-21: The Future and its Friends(In The Pipeline) - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: diybio-news-accumulative.png Type: image/png Size: 31558 bytes Desc: not available URL: From rtomek at ceti.pl Sat Jan 1 15:05:28 2011 From: rtomek at ceti.pl (Tomasz Rola) Date: Sat, 1 Jan 2011 16:05:28 +0100 (CET) Subject: [ExI] Von Neumann probes for what? In-Reply-To: References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> Message-ID: On Sat, 1 Jan 2011, BillK wrote: > On Sat, Jan 1, 2011 at 11:24 AM, Anders Sandberg wrote: > probe descriptions --- > > I think a probe infrastructure could be something that just looks like added > > value to a civilization. It launches the probes, they spread and set up > > waystations that can receive instructions and mindstates, as well as send > > back observations. If they want to use the system they can. The key limiters > > are whether the cost of the initial probe is high relative to the > > civilization GDP and whether the time horizons of *every* entity within it > > are so short there is no value in getting a fraction of the galaxy in the > > far future. > > > > > > Time horizons for a post-singularity culture tend towards eternity. > > If the intelligence is processing in a substrate a million times > faster than humans, that effectively 'freezes' the real universe. If > they live on the edge of a black hole, then it actually does freeze > the real universe so far as they are concerned. 
> Sending out probes that never seem to move away is a pointless > endeavour from their POV. No no, I think one of us has got this wrong. Isn't it that if they go inside the horizon, the "outside" universe speeds up? So the probes move among the stars like fireworks of a sort. And for us, they freeze. Or maybe you meant a white hole - they live their days in a bubble of fast time, while the universe can only hopelessly wait in pause for whatever gets out of this bubble one day. Of course, the pause is from their POV, we simply live as usual. > That's likely why the galaxy hasn't already been swamped with probes > many times over. > (Or, possibly, no culture has ever survived its singularity). Or that priorities and expectations change a lot after that. For an Amazonian native, going on holiday to Hawaii is quite incomprehensible, I guess. Ditto for buying books and stacking them on the floor because other places are already taken. And this is just the beginning. Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From kanzure at gmail.com Sat Jan 1 15:08:18 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 09:08:18 -0600 Subject: [ExI] Paper: Citizen Science Genomics as a Model for Crowdsourced Preventive Medicine Research Message-ID: Citizen Science Genomics as a Model for Crowdsourced Preventive Medicine Research http://www.jopm.org/evidence/research/2010/12/23/citizen-science-genomics-as-a-model-for-crowdsourced-preventive-medicine-research/ Melanie Swan, Kristina Hathaway, Chris Hogg, Raymond McCauley & Aaron Vollrath | Research | Vol.
2, 2010 | December 23, 2010 """ Abstract Summary: A research model for the conduct of citizen science genomics is described in which personal genomic data is integrated with physical biomarker data to study the impact of various interventions on a predefined endpoint. This research model can be used for large-scale preventive medicine studies by both institutional researchers and citizen science groups. The genome-phenotype-outcome methodology comprises seven steps: 1) identifying an area of genotype/phenotype linkage for study, 2) conducting a thorough literature review of data supporting this genotype/phenotype linkage, 3) elucidating the underlying biological mechanism, 4) reviewing related studies and clinical trials, 5) designing the study protocol, 6) testing the study design and protocol in a small pilot study, and 7) modifying study design and protocol based on information from the pilot study for a large-scale prospective study. This paper describes a real-world example of the methodology implemented for a proposed study of polymorphisms in the MTHFR gene, and how these polymorphisms may influence homocysteine levels and vitamin B deficiency. The current study looks at the possibility of optimizing personalized interventions per the genotype-phenotype profiles of individuals, and tests the hypothesis that simple interventions may be effective in reducing homocysteine in individuals with high baseline levels, particularly in the presence of a polymorphism in the MTHFR variant rs1801133. Keywords: MTHFR, homocysteine, genomics, polymorphism, variant, citizen science, patient-driven clinical trial, crowdsourced clinical trial, research study, self-experimentation, intervention, personalized medicine, preventive medicine, participatory medicine, quantified self, genome-phenotype-outcome study, citizen science genomics. Citation: Swan M, Hathaway K, Hogg C, McCauley R, Vollrath A. 
Citizen science genomics as a model for crowdsourced preventive medicine research. J Participat Med. 2010 Dec 23; 2:e20. Published: December 23, 2010. Competing Interests: The authors have declared that no competing interests exist. Introduction Continually decreasing costs in genomic sequencing have made it possible for individuals to obtain their own genomic data. An estimated 80,000 individuals have subscribed to consumer genomic services. Genotyping provider 23andMe counted 50,000 subscribers as of June 2010.[1] Navigenics and deCODEme had an estimated 20,000 and 10,000, respectively, as of March 2010.[2] Others may be clients of Pathway Genomics or other services. Today, individuals can view the 200 or so variants analyzed by consumer genomic companies for a variety of disease, drug response, trait, and carrier status conditions via a web-based interface, and a question naturally arises as to what else can be done with the data. Tools do not yet exist to identify and prevent disease before clinical onset. Integration of genomic, phenotypic, environmental, and microbiometric health data streams will be required to create reliable predictive tools. The potential volume of this data is staggering, numbering, perhaps, a billion data points per person,[3] which may routinely generate zettabytes of medical data.[4] The combination of multiple health data streams, the anticipated data deluge, and the challenges and expense of recruiting subjects for studies all suggest that there could be a benefit to supplementing traditional randomized clinical trials with other techniques.[5] Crowdsourced cohorts of citizen scientists (eg, patient registries) could be a significant resource for testing multiple hypotheses as research could be quickly and dynamically applied in various populations. Engaged citizen scientists could collect, synthesize, review, and analyze data. They could interpret algorithms, and run bioinformatic experiments. 
This paper proposes a research model that could be used in conducting citizen science genomics, that integrates personal genomic data with physical biomarker data and interventions, and that could be applied in large-scale preventive medicine studies by both institutional researchers and citizen science groups. Methods An increasing number of individuals have access to their own genomic data, would like to contribute this data to scientific research, and would like to put it to use in managing their own health. Scalable models for conducting citizen science studies are needed. The authors designed a methodology for the conduct of citizen science genomics which links genomic data to corresponding phenotypic measures and relevant interventions. The purpose is to create mechanisms for establishing and monitoring baseline measures of wellness, and tools for the conduct of preventive medicine. The key steps in the methodology include: 1. Selecting a specific area of genotype/phenotype linkage for potential study and generating a testable hypothesis 2. Conducting a literature review to validate the selected study area 3. Analyzing the underlying biological pathway and mechanism 4. Reviewing related studies and clinical trials 5. Designing the study protocol 6. Testing the study design in a small non-statistically significant pilot 7. Identifying the next steps for a full-scale launch of the study Results The results are presented as a detailed outline of the seven-step methodology for operating citizen science genomic studies. The methodology is implemented in the specific case of a proposed study looking at polymorphisms in the MTHFR gene and how these polymorphisms relate to homocysteine levels and vitamin B deficiency. 1. Select a specific area of genotype/phenotype linkage for potential study and generate a testable hypothesis. 
For the inaugural citizen science genomic study, 40 potential ideas were identified in a variety of health and behavioral genomic areas in recently published research (http://diygenomics.pbworks.com). One area that seemed conducive to study was the potential association of the MTHFR gene and vitamin B deficiency. MTHFR polymorphisms may keep vitamin B-9 (folic acid) from being metabolized into its active form, folate. This may lead to the potentially harmful accumulation of homocysteine. There is a strong research-supported association between the principal MTHFR variant (rs1801133) and homocysteine levels.[6] Genotyping data for MTHFR variants are available in 23andMe data. Furthermore, blood tests for homocysteine, vitamin B-12, and vitamin B-9 are readily obtainable, as are over-the-counter vitamin supplement interventions. A testable hypothesis was generated that supplements may be effective in reducing homocysteine levels, particularly for those with a genetic polymorphism. Studying MTHFR and vitamin B deficiency could have an important public health benefit since approximately half of the US population is estimated to have one or more MTHFR polymorphisms. The distribution of genotypes in the US for rs1801133 is 49% CC (homozygous normal), 40% CT (heterozygous), and 11% TT (homozygous risk).[7] In addition, vitamin B-12 deficiency is a common nutritional deficiency in both the US and the developing world,[8] particularly for the elderly and vegetarians (approximately 3% of the US population).[9]

2. Conduct a literature review to validate the selected study area

Numerous observational and prospective studies have found correlations between elevated plasma homocysteine levels and cardiovascular disease, renal disease, depression, anxiety, Alzheimer's disease, and colorectal cancer.[10][11][12][13][14] The majority of published literature relates to cardiovascular disease.
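As an aside, the US genotype distribution quoted above for rs1801133 (49% CC, 40% CT, 11% TT) can be checked for internal consistency against the Hardy-Weinberg expectation. A minimal sketch, with function names that are ours rather than the study's:

```python
# Sketch: derive the minor (T) allele frequency from the reported US
# genotype distribution for rs1801133 [7], then compare the observed
# genotype proportions with the Hardy-Weinberg expectation.

def minor_allele_freq(cc, ct, tt):
    """T-allele frequency from (CC, CT, TT) genotype proportions."""
    return ct / 2 + tt

def hardy_weinberg(q):
    """Expected (CC, CT, TT) proportions for minor-allele frequency q."""
    p = 1 - q
    return (p * p, 2 * p * q, q * q)

q = minor_allele_freq(cc=0.49, ct=0.40, tt=0.11)
print(q)                  # ~0.31
print(hardy_weinberg(q))  # roughly (0.48, 0.43, 0.10), close to observed
```

The reported distribution is close to the equilibrium expectation for a minor-allele frequency of about 0.31, so the quoted figures hang together.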
A meta-analysis of 30 prospective and retrospective studies (involving a total of 5,073 ischemic heart disease (IHD) events and 1,113 stroke events) showed that a 25% lower homocysteine level was independently associated with an 11% lower risk of coronary heart disease and a 19% lower risk of stroke.[10] Despite this, the causal relationship between elevated homocysteine and cardiovascular outcomes has not been conclusively proven. A large (n=12,064), recently published (June 2010), prospective, randomized study of patients with a prior myocardial infarction provided either folic acid or vitamin B-12 supplementation compared to placebo. The authors tracked coronary events over an average of 6.7 years. They found an average reduction of 28% in plasma homocysteine levels, but no difference between the vitamin group and placebo group in the occurrence of coronary events or death.[15] However, a prospective, randomized study of the impact of homocysteine levels on the progression of atherosclerosis showed that folic acid supplementation led to reduced homocysteine levels and a regression in carotid intima-media thickness (CIMT) compared to an increase in CIMT for the placebo group.[16] Although more research is needed, there appears to be adequate evidence that low homocysteine levels are desirable, and may reduce risk for a number of conditions.

3. Analyze the underlying biological pathway and mechanism

The MTHFR pathway and homocysteine metabolism are the underlying biological mechanisms in this study. There are a number of ways in which genetic variation and intervention may impact homocysteine metabolism. Homocysteine is a naturally-occurring amino acid in the blood which is broken down (metabolized) through three interconnected pathways: the folate cycle, methionine cycle, and transsulfuration pathway (Figure 1).[17] A detailed explanation of homocysteine metabolism is presented in the Supplementary Material.
The pathways are fairly complex and involve two other enzymes in addition to MTHFR. It is possible that different interventions could impact overall homocysteine metabolism in different ways. In Figure 1, the red boxes show the different places where the first intervention (the inactive form of B-9, further described below) may impact the pathway; the green box shows where the second intervention (the active form of B-9) may impact the pathway.

Figure 1: Homocysteine metabolism.

4. Review related studies and clinical trials

Several clinical trials have been conducted to investigate the ability of interventions to lower homocysteine levels. A detailed review of nine studies was conducted and is presented in the Supplementary Material. The average overall result was a 23% reduction in homocysteine. Two studies[18][19] specifically compared folic acid with the active form of folate, 5-MTHF (5-methyltetrahydrofolate). Both found that the active formulation was more effective in reducing homocysteine levels (Akoglu 37% versus 24%;[18] Lamers 19% versus 12%[19]). The existing clinical trials suggest that several factors may influence baseline homocysteine levels, in particular, age, health status, and genotype. Individuals who were older (especially over 50), had just experienced a major health disruption, or had one or more polymorphisms in the main MTHFR variant rs1801133, were more likely to have higher baseline homocysteine levels than those who did not (Supplementary Material: Figure 2). Further, the reduction proportion from the baseline level was greater for those individuals with higher initial levels of homocysteine.

5. Design the study protocol

The required genomic and phenotypic data were identified.
Approximately 20 variants have been linked with homocysteine in genome-wide association studies.[20] MTHFR 677C>T (rs1801133) was selected as the variant with the strongest association to mild enzyme deficiency, and MTHFR 1298A>C (rs1801131) as the leading secondary variant.[6] The corresponding phenotypic measures selected were blood tests for homocysteine, vitamin B-12, and folate (vitamin B-9). The type and timing of interventions were determined based on published literature. The background research on the MTHFR mechanism suggests that individuals with one or more polymorphisms may not be able to metabolize folic acid (the inactive form of B-9) into its active form (tetrahydrofolate or folate) as efficiently as individuals without a polymorphism. Therefore, the first intervention selected was administration of the inactive form of B-9, which is commonly present in over-the-counter B vitamin products such as Centrum multivitamins. The second intervention involved administration of the active form of folate (L-methylfolate), and the third was administration of the inactive and active forms together (also being tested by a current clinical trial).[21] The supplement contents were as follows: the Centrum multivitamin contained 2 mg of pyridoxine hydrochloride (B-6), 400 mcg of folic acid (B-9), and 6 mcg of methylcobalamin (B-12); the Life Extension Foundation L-methylfolate contained 1,000 mcg of L-methylfolate. The interventions were to be taken on a daily basis, at the same time of day, with food. For this pilot phase of the study, the authors opted to use a crossover study design. Each individual tried each intervention, in sequence, essentially serving as his or her own control.
While other homocysteine clinical trials typically had at least four-week periods for testing interventions, two representative trials confirmed that most of the observed effect occurred within the first two weeks.[18][19] Therefore, in the pilot study, two-week minimum intervention periods were selected with a two-week washout period at the beginning. Participant recruitment was accomplished by talking about the study in public speaking engagements and targeting special interest groups such as the DIYbio, Quantified Self, Health 2.0, Singularity University, futurist, and life extension communities, particularly 23andMe clients. Some potential participants were motivated to sign up for 23andMe in order to participate in citizen science genomic studies. Many potential participants were interested, but did not join the study for a variety of reasons. The biggest barrier was the self-supported cost of blood tests and supplement interventions ($291). In a full-scale launch, other strategies will be necessary to target a more representative segment of the population.

6. Test the study design in a small non-statistically significant pilot

To test the study design, a small non-statistically significant pilot study was conducted in three phases: execution, results collection, and results analysis. The type of analysis that could be conducted on data results is presented here, realizing that the pilot cohort sample size (n=7) is not statistically significant. Seven healthy men and women, ages 26-47, who had not taken any vitamin supplements for two weeks or more and met other usual study exclusion criteria, were enrolled in the study. The study was conducted from June to December 2010. Three participants cycled through the study at nearly exact two-week intervals. Three participants went through the study in two- to three-week periods on average, and one participant specifically tested three-week intervals.
Six participants ordered blood tests from the Life Extension Foundation as they offered the lowest cost, and lab work orders were fulfilled at local LabCorp (standardized testing) facilities in the US. The remaining participant had homocysteine levels tested at a Japanese medical facility in Tokyo. The L-methylfolate supplement was mail-ordered by the group from the Life Extension Foundation. The Centrum multivitamin was purchased individually at local drug stores. All seven of the study participants collaborated in the study design or an active review of the protocol. The study relied on self-reporting that the supplement protocol was followed. Participants tried to avoid unusual variance in nutrition, exercise, stress levels, sleep, and other behaviors. Participants looked up their genotype data for the relevant MTHFR variants in their 23andMe data files (genotyping is assumed to be accurate[22]), and recorded them in the study's public wiki (http://diygenomics.pbworks.com/MTHFR_Results). Blood test measurements from LabCorp PDFs or other reports were entered similarly in the public wiki. All participants were interested in full transparency and public accessibility of their genotypic and phenotypic study results, and allowed their names to be associated with the study. Participants were enumerated as Citizen 1, 2, etc., with their initials.

Genotype results: Table 1 lists the pilot study participants and their genotype data for the two reviewed variants. For the main associated variant, rs1801133, three participants are homozygous normal (GG) and four are heterozygous (AG). Two of the heterozygous participants are also vegetarians/vegans which further increases their potential risk of vitamin B deficiency. For the secondary variant, rs1801131, two participants are homozygous normal (TT), four are heterozygous (GT), and one is homozygous for the polymorphism (GG).
The table then includes maternal and paternal haplotype group information from 23andMe and demographic information regarding participant ethnicity, gender, age, and vegetarian status. 23andMe's genotype reporting method (all genotypes are listed as their forward strand values) means that sometimes their genotyping values need to be mapped to other conventions for interpretation. Commonly used resources for obtaining major/minor allele mappings indicate C/T as the major/minor alleles for rs1801133, and A/C for rs1801131 (dbSNP;[23] SNPedia;[24] HuGE Navigator[7]). The mapping of the alleles from the standard resources to 23andMe would be that rs1801133 C/T is G/A in 23andMe data, and rs1801131 A/C is T/G in 23andMe data (C maps to G and vice versa; A maps to T and vice versa). The mapping was confirmed by comparing deCODEme, Navigenics, and 23andMe data files for the same individuals, and by reviewing genotype prevalence across multiple 23andMe files.

Table 1: Genotype results and demographic profiles.

Phenotype results: Figure 2 and Table 2 illustrate how homocysteine levels shifted during the pilot study. Table 3 contains the blood test data for vitamin B-12. At baseline, homocysteine levels ranged from 6.4-14.1 µmol/L. The cohort mean was 10.4 (SD (standard deviation) 3.03), and was higher for vegan/vegetarian individuals with a polymorphism in rs1801133 (12.8 versus 9.5). After the first intervention (Centrum multivitamin), homocysteine went down for six individuals and up for one individual, and had a tighter range (5.7-10.6; mean 8.8; SD 1.50). After the second intervention (L-methylfolate), homocysteine was higher for five individuals, including the four with a polymorphism, and lower for two (mean 10.3; SD 2.77). For the four individuals that included a plasma folate test, levels were at or above the high point of the test reference range (19.9 ng/mL) (Supplementary Material: Table 3) after the second intervention.
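The strand-complement mapping described above (23andMe forward-strand values versus the dbSNP/SNPedia convention) is mechanical enough to sketch in a few lines; the function name here is ours, not from the study:

```python
# Sketch of the 23andMe-to-dbSNP strand mapping described in the text:
# C maps to G and vice versa, A maps to T and vice versa. For rs1801133,
# 23andMe's G/A therefore corresponds to dbSNP's C/T.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def to_dbsnp_convention(genotype_23andme):
    """Complement each allele, then sort for a canonical ordering."""
    return "".join(sorted(COMPLEMENT[a] for a in genotype_23andme))

print(to_dbsnp_convention("GG"))  # rs1801133 homozygous normal -> CC
print(to_dbsnp_convention("AG"))  # heterozygous -> CT
print(to_dbsnp_convention("AA"))  # homozygous risk -> TT
```

Note this only works for SNPs whose alleles are not strand-ambiguous; an A/T or C/G SNP would complement onto itself, which is why the study's cross-check against deCODEme and Navigenics files is a sensible confirmation step.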
After the third intervention (Centrum multivitamin + L-methylfolate), for three of the four participants who tried it, homocysteine was higher than with L-methylfolate alone. In the final step, five individuals completed an ending washout blood test, with three participants, including two with a polymorphism, having lower homocysteine levels than after the third intervention. The fourth participant had slightly higher homocysteine, and the fifth participant had markedly higher homocysteine as compared with the last intervention tried, the L-methylfolate. For three out of four participants that included the vitamin B-12 test (Table 3), B-12 levels went up an average of 17.5% after the first intervention, and one participant's went down 17%. B-12 levels then generally stayed flat or increased slightly for the duration of the study. An analysis of the test data results was performed to calculate the percent declines for each period from baseline and for each period relative to the prior period (smoothing was employed for one missing value). There was a 19% average decline in homocysteine for the best solution in any period versus the baseline (Table 2) and a 21% average decline in homocysteine for the best solution in any period versus the prior period. There was not a significant difference between homozygous normal individuals (GG) for the main variant rs1801133 (18% average reduction) and heterozygous individuals (AG) (19% reduction), but the two vegan/vegetarian heterozygous individuals experienced a 28% average reduction. In a larger study that investigated genotype polymorphisms, heterozygous subjects were found to have a greater reduction (12% versus 9%).[25] The secondary variant, rs1801131, did not seem to have an impact, either in isolation or when considered together with rs1801133.

Figure 2: Participant homocysteine levels at study intervals.

Table 2: Participant homocysteine levels (µmol/L) and statistics at study intervals.
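The per-period decline analysis described above (best reduction versus baseline and versus the prior period) can be sketched as follows; the series shown is illustrative, not an actual participant's data:

```python
# Sketch of the percent-decline analysis: given one participant's
# homocysteine series (baseline first, then one value per intervention
# period), report the best decline vs. baseline and vs. the prior period.

def pct_decline(old, new):
    """Percent decline from old to new (negative means an increase)."""
    return (old - new) / old * 100

def best_declines(series):
    baseline = series[0]
    vs_baseline = max(pct_decline(baseline, v) for v in series[1:])
    vs_prior = max(pct_decline(a, b) for a, b in zip(series, series[1:]))
    return vs_baseline, vs_prior

# Illustrative series: baseline, multivitamin, L-methylfolate, combined
print(best_declines([12.8, 10.2, 11.5, 11.0]))
```

Averaging these two per-participant figures across the cohort is how the 19% (versus baseline) and 21% (versus prior period) summary numbers in the text would be produced.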
Table 3: Participant vitamin B-12 levels (pg/mL) at study intervals.

Discussion

The overall result seen in this pilot was a 19% average reduction in homocysteine levels. While not statistically significant, this is consistent with the 23% average reduction achieved in reported clinical trials. While a homocysteine range of 0.0-15.0 µmol/L is considered clinically normal, many scientists contend that lower levels are preferable. Suggested preferred levels are less than 11.4 µmol/L for men and less than 10.4 µmol/L for women in one paper cited.[26] According to these measures, four out of the seven pilot participants had high baseline homocysteine levels which they were able to meaningfully reduce with supplement interventions. The best intervention for five out of seven individuals was the regular B vitamin as opposed to the active form of B-9 (folate). The active form of B-9 worked better for one individual. The remaining individual, having a homozygous minor variant form of rs1801133, did not have high initial homocysteine and found that the active form of B-9 worked better than the regular B vitamin, but that taking no supplement at all was best. This suggests that targeted solutions may be optimum for groups of individuals with certain profiles. The biggest question was why the blood test values for homocysteine increased in five individuals (three with high baseline homocysteine; four with a polymorphism) after B-9 when other clinical trials found the active form of B-9 to be the superior intervention for lowering homocysteine. Participant behavior was generally consistent, and the reproducibility of testing results (within-person, between-person, and in labs) was also confirmed (Supplementary Material: Variability in homocysteine test results).
Variation could have been introduced by a number of factors including the natural variability in homocysteine levels, variability in the active ingredient amounts in the intervention supplements, carryover effects between interventions (also evidenced by the lack of homocysteine levels returning to baseline levels in the final washout cases), or other complexities related to the homocysteine pathway. This pilot study represents an example of how genomic-phenotypic-outcome research can be conducted in the era of personalized genetic data availability. It also illustrates the potential importance of including genomics as a data element in preventive medicine research, and illustrates the potential of using motivated individuals in citizen science genomic studies. Several participants also indicated the value of their experience and how it translated into post-study behavioral changes (Supplementary Material: Personal statements from study participants).

7. Identify the next steps for a full-scale launch of the study

There are a number of steps required for a full-scale cohort launch including implementing an independent ethical review and informed consent process, adjusting the study protocol, forming strategies for study financing and representative population targeting, and creating a data collection and analysis platform:

Independent Ethical Review and Informed Consent

Citizen science genomics is human subjects research and as such, should have independent ethical review and oversight. There are at least two independent review boards in the US which have indicated their willingness to discuss the potential review of citizen science studies: IRC, Independent Review Consulting, Inc., in Corte Madera, CA (http://www.irb-irc.com) and WIRB, the Western Institutional Review Board, in Olympia, WA (http://www.wirb.com). A related model of consumer genomic research conducted by 23andMe[27] brought up a number of ethical issues,[28] and ultimately IRC reviewed their study.
As citizen science models develop, oversight models could evolve to include citizen ethicists, citizen review boards, health advisors (analogous to financial advisors), and insurance mechanisms for personal health experimentation communities. Informed consent would obviously be required in any full-scale human subjects research study.

Protocol Adjustment

The pilot study confirmed that the central point for investigation in a full-size cohort is whether interventions can be optimized according to the genotype-phenotype profiles of individuals. The pilot study also suggested that a regular B vitamin may be most effective in lowering homocysteine in individuals with high baseline homocysteine levels, especially in the presence of one or more rs1801133 polymorphisms. A number of structural changes could be made to improve scientific rigor in a broader launch, including participant blinding, inclusion of a placebo arm, and standardized monitoring, testing, and interventions.

Strategies for Funding and Representative Population Targeting

To date, citizen science genomics has relied on the study recruitment pool being the limited number of individuals (approximately 100,000) who have subscribed to personal genotyping services. These individuals may not be representative of the population at large; the literature characterizes direct-to-consumer genomic customers as early adopters and self-driven information seekers.[29][30] For widespread public health studies, it will be necessary to target a broad diversity of participants across multiple dimensions including information-seeking and action-taking propensity, ethnicity, and socioeconomic background. To accomplish this, traditional recruitment techniques could be used together with new patient-centered social media strategies.

Conclusion

This paper presents citizen science genomics, a research model contemplated for large-scale execution of preventive medicine research in crowdsourced cohorts.
The model integrates personal genomic data with physical biomarker data to study the impact of various interventions on a predefined endpoint. Citizen science genomics could allow both traditional researchers and citizen scientists to access crowdsourced subjects who are ready to engage in research studies. Citizen scientists could be important resources as they increasingly have access to their health information, may be willing to contribute their data to various studies, have the interest and motivation to investigate conditions of personal relevance, and can leverage crowdsourced labor for data collection, monitoring, synthesis, and analysis, and new tool development. Preventive medicine is a key public health challenge in the coming decades. New models like citizen science genomics are needed to answer important questions. Dropping prices and new technologies for collecting data regarding microbiomes, proteomics, imaging, personal tracking, and other information streams will increase the feasibility of this approach. Preventive medicine has the potential to take on new relevance and meaning through the use of citizen science genomic studies, as crowdsourced participants establish baseline and ongoing longitudinal measures for wellness, health maintenance, and customized intervention.

References

1. Goetz T. Sergey Brin's search for a Parkinson's cure. Wired. June 22, 2010. Available at: http://www.wired.com/magazine/2010/06/ff_sergeys_search/all/1. Accessed September 20, 2010. 2. Pollack A. Consumers slow to embrace the age of genomics. New York Times. March 19, 2010. Available at: http://www.nytimes.com/2010/03/20/business/20consumergene.html. Accessed September 20, 2010. 3. Hood L. Systems medicine, transformational technologies and the emergence of proactive P4 medicine. Paper presented at: Personalized Medicine World Conference; January 19-20, 2010; Mountain View, CA. Available at: http://pmwc2010.com/program.php. Accessed September 20, 2010. 4.
Enriquez J. As the future catches you. Paper presented at: 2nd Annual Consumer Genetics Conference; June 2-4, 2010; Boston, MA. Available at: http://www.consumergeneticsshow.com/uploads/2010_Early_Schedule.pdf.pdf. Accessed September 20, 2010. 5. Hartwell L. The promise and progress of personalized medicine. Paper presented at the Sandra Day O'Connor College of Law Personalized Medicine Conference; March 8-9, 2010; Scottsdale, AZ. Available at: http://online.law.asu.edu/events/Personalized_Medicine. Accessed September 20, 2010. 6. Sibani S, Christensen B, O'Ferrall E, et al. Characterization of six novel mutations in the methylenetetrahydrofolate reductase (MTHFR) gene in patients with homocystinuria. Hum Mutat. 2000;15(3):280-7. 7. Yu W, Yesupriya A, Chang M, et al. Genotype Prevalence Catalog. HuGE Navigator. Available at: http://www.hugenavigator.net/HuGENavigator/raceDisplay.do?submissionID=57&variationID=57. Accessed September 20, 2010. 8. Harvard Health Publications. Vitamin B12 deficiency: vegetarians, elderly may not get enough vitamin B12, says the Harvard Health Letter. Available at: http://www.health.harvard.edu/press_releases/vitamin_b12_deficiency. Accessed November 29, 2010. 9. Vegetarian Times. Vegetarian Times Study Shows 7.3 Million Americans Are Vegetarians and an additional 22.8 Million Follow a Vegetarian-Inclined Diet (Data collected by the Harris Interactive Service Bureau; data analysis performed by RRC Associates Colorado). 2008. Available at: http://www.vegetariantimes.com/features/archive_of_editorial/667. Accessed November 29, 2010. 10. Homocysteine Studies Collaboration. Homocysteine and risk of ischemic heart disease and stroke: a meta-analysis. JAMA. 2002;288(16):2015-2022. 11. Stanger O, Fowler B, Piertzik K, et al. Homocysteine, folate and vitamin B12 in neuropsychiatric diseases: review and treatment recommendations. Expert Rev Neurother. 2009 Sep;9(9):1393-412. 12. Williams K, Schalinske K.
Homocysteine metabolism and its relation to health and disease. Biofactors. 2010 Jan;36(1):19-24. 13. Hooshmand B, Solomon A, Kåreholt I, et al. Homocysteine and holotranscobalamin and the risk of Alzheimer disease: a longitudinal study. Neurology. 2010 Oct 19;75(16):1408-14. 14. Zhu Q, Jin Z, Yuan Y, et al. Impact of MTHFR gene C677T polymorphism on Bcl-2 gene methylation and protein expression in colorectal cancer. Scand J Gastroenterol. 2010 Dec 6. 15. Study of the Effectiveness of Additional Reductions in Cholesterol and Homocysteine (SEARCH) Collaborative Group, Armitage JM, Bowman L, et al. Effects of homocysteine lowering with folic acid plus vitamin B12 vs. placebo on mortality and major morbidity in myocardial infarction survivors: a randomized trial. JAMA. 2010 Jun 23;303(24):2486-94. 16. Ntaios G, Savopoulos C, Karamitsos D, et al. The effect of folic acid supplementation on carotid intima-media thickness in patients with cardiovascular risk: a randomized, placebo-controlled trial. Int J Cardiol. 2010 Aug 6;143(1):16-9. 17. Scott J, Weir D. Folic acid, homocysteine and one-carbon metabolism: a review of the essential biochemistry. J Cardiovasc Risk. 1998;5(4):223-7. 18. Akoglu B, Schrott M, Bolouri H, et al. The folic acid metabolite L-5-methyltetrahydrofolate effectively reduces total serum homocysteine level in orthotopic liver transplant recipients: a double-blind placebo-controlled study. Eur J Clin Nutr. 2008 Jun;62(6):796-801. Page 798, Table 3. 19. Lamers Y, Prinz-Langenohl R, Moser R, et al. Supplementation with [6S]-5-methyltetrahydrofolate or folic acid equally reduces plasma total homocysteine concentrations in healthy women. Am J Clin Nutr. 2004 Mar;79(3):473-8. 20. Paré G, Chasman DI, Parker AN, et al. Novel associations of CPS1, MUT, NOX4, and DPEP1 with plasma homocysteine in a healthy population: a genome-wide evaluation of 13,974 participants in the Women's Genome Health Study. Circ Cardiovasc Genet.
2009 Apr;2(2):142-50. 21. Flugelman M. Examining B12 Deficiency Associated With C677T Mutation on MTHFR Gene in Terms of Commonness and Endothelial Function. Clinical trial in progress: NCT00730574. Available at: http://clinicaltrials.gov/ct2/show/NCT00730574. Accessed September 20, 2010. 22. Ng PC, Murray SS, Levy S, et al. An agenda for personalized medicine. Nature. 2009;461:724-726. 23. National Center for Biotechnology Information (NCBI). dbSNP. Available at: http://www.ncbi.nlm.nih.gov/sites/entrez?db=snp&cmd=search&term=rs1801133 and http://www.ncbi.nlm.nih.gov/sites/entrez?db=snp&cmd=search&term=rs1801131. Accessed September 20, 2010. 24. Cariaso M. SNPedia. Available at: http://www.snpedia.com/index.php/Rs1801133 and http://www.snpedia.com/index.php/Rs1801131. Accessed September 20, 2010. 25. Ashfield-Watt PA, Pullin CH, Whiting JM, et al. Methylenetetrahydrofolate reductase 677C->T genotype modulates homocysteine responses to a folate-rich diet or a low-dose folic acid supplement: a randomized controlled trial. Am J Clin Nutr. 2002 Jul;76(1):180-6. 26. Selhub J, Jacques PF, Rosenberg IH, et al. Serum total homocysteine concentrations in the third National Health and Nutrition Examination Survey (1991-1994): population reference ranges and contribution of vitamin status to high serum concentrations. Ann Intern Med. 1999 Sep 7;131(5):331-9. 27. Eriksson N, Macpherson JM, Tung JY, et al. Web-based, participant-driven studies yield novel genetic associations for common traits. PLoS Genet. 2010 Jun 24;6(6):e1000993. 28. Gibson G, Copenhaver GP. Consent and internet-enabled human genomics. PLoS Genet. 2010 Jun 24;6(6):e1000965. 29. McGuire AL, Diaz CM, Wang T, et al. Social networkers' attitudes toward direct-to-consumer personal genome testing. Am J Bioeth. 2009;9:3-10. 30. McGowan ML, Fishman JR, Lambrix MA. Personal genomics and individual identities: motivations and moral imperatives of early users. New Genetics and Society.
2010 Sep;29(3):261-290.

Acknowledgments

We would like to acknowledge Takashi Kido and William Reinhardt for sharing their genotypic and phenotypic data, and many individuals who shared their genetic data for research purposes including David Orban, Geoffrey Shmigelsky, Eri Gentry, Todd Huffman, Fadi Bishara, Richard Leis, Jr., Mark Even Jensen, Misha Angrist, and several parties who wish to remain anonymous. We would like to acknowledge Lyn Powell and Lucymarie Mantese for their advisory contribution and study support. Copyright: © 2010 Melanie Swan, Kristina Hathaway, Chris Hogg, Raymond McCauley, and Aaron Vollrath. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the author(s), with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings. """ - Bryan http://heybryan.org/ 1 512 203 0507 From spike66 at att.net Sat Jan 1 15:43:55 2011 From: spike66 at att.net (spike) Date: Sat, 1 Jan 2011 07:43:55 -0800 Subject: [ExI] Von Neumann probes for what? In-Reply-To: <4D1F0ED6.2050205@aleph.se> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> Message-ID: <001201cba9ca$b2c6ebe0$1854c3a0$@att.net> ... On Behalf Of Anders Sandberg >... The real problem is when you get conflicts between expanding replicator clouds. I haven't finished my work on this, but it looks like there are endless war solutions where resources get used up but the conflict never ends...--Anders Sandberg Ja Anders, thanks for pointing this out. A few months ago I came to the realization that the formation of an MBrain does not put an end to war.
It changes the form of war: no actual property destruction, projectiles or death, none of that unpleasantness, but war continues. That caused me to rethink the notion of our ethical obligation to expand like crazy throughout the galaxy as soon as we can. Reasoning: to explain the apparent silence of the cosmos, one possibility is that humans really are the very first tech enabled intelligence in the galaxy. We really are alone. If so, other intelligent lifeforms might be on the way towards evolving, but are a few million years away from where we are now. Conflict between the civilizations is inevitable and unpredictable. So if we expand out to where they are and MBrain them preemptively, we may defuse an inevitable war. On the other hand, we know for sure that humans are a warlike species, and we can imagine an intelligent species that is not warlike (it *is* hard to do), and really has nothing to kill and die for and no religion too etc. But that isn't human. So I end up with a contradiction: it is our moral obligation to expand into the galaxy to prevent war, but it will result in war with ourselves so it is our moral obligation to not. spike From algaenymph at gmail.com Sat Jan 1 15:45:20 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Sat, 01 Jan 2011 07:45:20 -0800 Subject: [ExI] Is there anything special about Boston? Message-ID: <4D1F4C10.3000208@gmail.com> I'm thinking of writing a story about a transhumanist in the Boston area. However, I may decide on another city if there isn't something of interest to transhumanists that's *only* in Boston. All that comes to mind right now are Kurzweil, MIT, and the W3C but I don't know how important to transhumanism they are. So what's in the area that'd be a big draw? From eugen at leitl.org Sat Jan 1 16:16:16 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 17:16:16 +0100 Subject: [ExI] Von Neumann probes for what?
In-Reply-To: References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> Message-ID: <20110101161616.GZ16518@leitl.org> On Sat, Jan 01, 2011 at 11:40:32AM +0000, BillK wrote: > If the intelligence is processing in a substrate a million times Who built that substrate? What happens when it gets crowded? > faster than humans, that effectively 'freezes' the real universe. If > they live on the edge of a black hole, then it actually does freeze > the real universe so far as they are concerned. > Sending out probes that never seem to move away is a pointless > endeavour from their POV. How do mighty oak forests grow? Acorn by acorn. > That's likely why the galaxy hasn't already been swamped with probes > many times over. > (Or, possibly, no culture has ever survived its singularity). You're very good at explaining why we're still in Africa. Or why the first autocatalytic sets bothered with autocatalysis at all. Dreary business that, let's rather contemplate entropy. Another one: it pretty much looks as if transhumanists don't produce a lot of children. Whereas http://postbiota.org/pipermail/tt/2010-December/008311.html others do. Assuming memes are sticky, what's the face of a resulting culture? So you've got a population, some small fraction of it is expansive, and the overwhelming majority is not. So guess who's going to drop by for a visit? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sjatkins at mac.com Sat Jan 1 16:29:10 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 01 Jan 2011 08:29:10 -0800 Subject: [ExI] Second-Life dissociation/simulation as an improvement over reality.
In-Reply-To: <035c01cba975$bfc297e0$3f47c7a0$@net> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <24467B7D-A277-4E1A-87DF-9981AB535CDF@bellsouth.net> <028401cba949$fdabe9c0$f903bd40$@net> <035c01cba975$bfc297e0$3f47c7a0$@net> Message-ID: <46940266-E916-4990-ABA1-90754F459EF0@mac.com> I will ask a few and see if they are interested and can suggest others who might be also. - s On Dec 31, 2010, at 9:35 PM, Amara D. Angelica wrote: > Samantha, > > Wow! I'd like to interview a few extreme SL denizens who have experienced this? Any tips on how to reach them? > > - Amara > > AA: On a related note, is there any evidence that long-time users of Second Life experience such dissociation? And is it possible or likely that as simulation technology improves and becomes widely available, mass dissociation or psychosis might occur? The effect would be increased by using high-res VR with full immersion and at least 180 degrees to avoid peripheral vision artifacts (humans have about 200 degrees vision) and ultra-high-resolution such as http://www.sensics.com/products/AugmentedReality.php (4200x2400 pixels). Also see http://cb.nowan.net/blog/state-of-vr/state-of-vr-displays/. > > SA: This is a pretty well known phenomenon in SL. Some describe it as what makes a true digital person - the experiencing of avatar as self and even physical self as alternate embodiment of avatar. I used to get strange effects like the physical world looking more unreal to me than the virtual world. But that seems to have been a temporary adjustment period when I was spending much more time in SL. Many report a distinct phenomenon of two persons, one virtual, sharing the same brain. Personality creation, living within the creation, is something all humans do growing up (or more often in some cases). It is not surprising that we sometimes spawn off new "selves" in virtual worlds as they improve. It is experienced as much more than 'mere' make-believe.
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtomek at ceti.pl Sat Jan 1 16:31:10 2011 From: rtomek at ceti.pl (Tomasz Rola) Date: Sat, 1 Jan 2011 17:31:10 +0100 (CET) Subject: [ExI] Is there anything special about Boston? In-Reply-To: <4D1F4C10.3000208@gmail.com> References: <4D1F4C10.3000208@gmail.com> Message-ID: On Sat, 1 Jan 2011, AlgaeNymph wrote: > I'm thinking of writing a story about a transhumanist in the Boston area. > However, I may decide on another city if there isn't something of interest to > transhumanists that's *only* in Boston. All that comes to mind right now are > Kurzweil, MIT, and the W3C but I don't know how important to transhumanism > they are. > > So what's in the area that'd be a big draw? The community of Common Lisp programmers isn't unique to the very area but is quite visible there, compared to other places on the planet. Just my impression. Maybe I see what I am prepared to look for, however. Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From sjatkins at mac.com Sat Jan 1 16:31:27 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 01 Jan 2011 08:31:27 -0800 Subject: [ExI] simulation as an improvement over reality.
In-Reply-To: References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> Message-ID: <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> On Dec 31, 2010, at 9:53 PM, John Clark wrote: > On Dec 31, 2010, at 5:28 PM, The Avantguardian wrote: > >> The soul? Is that what you think this is about? > > Yes, that is exactly what I think this is about. You say the copy is perfect but it is nevertheless missing something; leaving aside the obvious illogic of such a thing, what exactly is this secret sauce that the original has that the copy does not? You say it is not information, and you'd better say it is not atoms or you will end up inundated in absurdities, so this mysterious ingredient must be something else entirely and it is of enormous importance too, but for some unknown reason it cannot be explained or even detected by the scientific method. There is already a word in the English language for something like that, but I can't really blame you, I'd feel pretty foolish using The Word That Must Not Be Named too. Why drag soul into this? A perfect copy is not the original. That is what this unending discussion seems to sum up to. OK. Fine. Next. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtomek at ceti.pl Sat Jan 1 16:35:16 2011 From: rtomek at ceti.pl (Tomasz Rola) Date: Sat, 1 Jan 2011 17:35:16 +0100 (CET) Subject: [ExI] Von Neumann probes for what? In-Reply-To: <20110101161616.GZ16518@leitl.org> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> <20110101161616.GZ16518@leitl.org> Message-ID: On Sat, 1 Jan 2011, Eugen Leitl wrote: > So guess who's going to drop by for a visit? Um, "entrepreneurs" looking for a viceroy of Earth position? Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. 
** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From eugen at leitl.org Sat Jan 1 16:42:11 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 17:42:11 +0100 Subject: [ExI] Singletons In-Reply-To: <4D1F0ECF.2070409@aleph.se> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> Message-ID: <20110101164211.GA16518@leitl.org> On Sat, Jan 01, 2011 at 12:23:59PM +0100, Anders Sandberg wrote: > A singleton doesn't necessarily have to be synchronized. Imagine a set > of local rules that gets replicated, keeping things constrained wherever > they go. The problem is that the local rules cannot be static if the underlying substrate isn't. And if there's life, it's not static. Unless the cop keeps beating you into submission every time you deviate from the rules. And if the rule set evolution is open-ended, it is uncomputable. So why have a baton-twirling cop on each block in the first place? Some of them will wind up abusive, or crooked. > I have the same aversion. However, I am open to the possibility that a > civilization without global coordination that really can put its foot > down and say "no!" to some activities will with a high probability be > wiped out by some xrisk or misevolution. I am still not convinced that That will only happen locally. The problem of invasive species is solved when the ecosystem enters a new equilibrium. > this possibility is the truth, but it seems about as likely as the > opposite case. > > I would love to be able to find some good arguments that settle things > one way or another. The problem is that the xrisk category is pretty big > and messy, with unknown unknowns.
Let's look at a population of cultures the size of a galaxy. How do you produce an existential risk within a single system that can wipe out more than a stellar system? In order to produce larger scale mayhem you need to utilize the resources of a large number of stellar systems concertedly, which requires large scale cooperation of pangalactic EvilDoers(tm). > Military budgets are a few percent of GDP for heavily armed countries, > and maybe equally large for policing. In our bodies the immune system > accounts for ~20% of metabolism if I remember right. The immune system is a good example of an evolved system that is guarding self from non-self, with the energy and false positives tradeoff. The problem of the hypothetical cosmic cop on every block scenario is that in order for it to work it appears to be an irresistible force of nature, to which the ecosystem can only react. It cannot push back. > Singletons doesn't have to be sinister Master Control Programs, they > could be some form of resilient oversight body implementing an If they cannot enforce, they're not there. If they can be attacked, they're not there. In order for it to work it would be the MCP from hell. For all practical purposes, the God Hypervisor. > unchanging constitution. The von Neumann probe infrastructure mentioned > in the other thread could implement a singleton as an interface between > the colonizer/infrastructure construction layer and the "users", > essentially providing them with a DRMed galactic infrastructure. How You know that DRM can't work, and if you're directly accessing the physical layer there's no virtualization. I don't see how anyone would deliberately let themselves become somebody's serf. So you actually have to dispatch an overwhelming force putting everybody in leg irons. I would know what I would do if someone attempted that.
> perfect they need to be depends on how dangerous failures would be; the > more scary and brittle the situation, the more they would need to > prevent certain things from ever happening, but it could just be that > they act to bias the evolution of a civilization away from certain bad > attractor states like burning cosmic commons. But there do not appear to be any cosmic commons. And before we engage in militant astroecology, we better fix our act at home first. If you live on this planet, you cannot reverse the Holocene extinction. You cannot reduce your footprint to zero. You cannot reduce your population growth. >> I'm sure such a thing would be a dictator's wet dream. > > Yup. A bad singleton is an xrisk on its own. I was wrong. There *is* a way to Blight the universe. It's by letting loose the cosmic cops before anyone ever had time to graduate kindergarten. From eugen at leitl.org Sat Jan 1 16:44:37 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 17:44:37 +0100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> Message-ID: <20110101164437.GB16518@leitl.org> On Sat, Jan 01, 2011 at 08:31:27AM -0800, Samantha Atkins wrote: > Why drag soul into this? A perfect copy is not the original. > That is what this unending discussion seems to sum up to. OK. Fine. Next. A perfect copy is indistinguishable from the original. Location is not a label encoded within the copy, or else it would be distinguishable. See, it's easy. From eugen at leitl.org Sat Jan 1 16:49:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 17:49:57 +0100 Subject: [ExI] simulation as an improvement over reality.
In-Reply-To: <135888.23656.qm@web65615.mail.ac4.yahoo.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> Message-ID: <20110101164957.GC16518@leitl.org> Your quoting is all screwed up, see below. On Fri, Dec 31, 2010 at 02:28:05PM -0800, The Avantguardian wrote: > > > From: John Clark > >To: ExI chat list > >Sent: Fri, December 31, 2010 1:08:17 PM > >Subject: Re: [ExI] simulation as an improvement over reality. > > > > > >On Dec 31, 2010, at 1:22 PM, Damien Broderick wrote: > > > >Dear dog in Himmel! NOBODY HAS EVER DENIED THIS! An exact copy of you MUST > >experience himself as you. That's not the problem. The real issue nobody ever > >seems to answer was posed by Stuart: > >>"you should be alright with your long lost twin brother showing up, locking you > > > >>in the cellar, > >>and assuming your identity. Or cheating death by brainwashing someone else > into > >>honestly believing they are you." > >>You'd be okay with that, Ben? > > Damien, that is an extremely stupid question and I think you're smart enough to > realize it was a stupid question; if you're not now embarrassed in asking it you > > damn well should be. > > Yes, the copy experiences self and world exactly as you do and is therefore *a* > you. No, *you* here and now have no stake (other than empathy or envious > hatred) in that replica consciousness > And so you believe that you can copy everything about you EXCEPT for the very > most important part, The Thing That Must Not Be Mentioned; at least not > mentioned if you don't want to be laughed at by the scientifically minded, the > thing that starts with the letter "s". > > > The soul? Is that what you think this is about? I am not talking about > metaphysics here. I am an event that has a set of very physical space-time > coordinates. Can you copy my space-time coordinates?
If my copy does not occupy > my position in space and time, it is not me. You can re-enact the battle of > Gettysburg down to the most excruciating detail, but never will your reenactment > *be* the battle of Gettysburg. > > Stuart LaForge > > > "There is nothing wrong with America that faith, love of freedom, intelligence, > and energy of her citizens cannot cure."- Dwight D. Eisenhower -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Sat Jan 1 17:03:27 2011 From: pharos at gmail.com (BillK) Date: Sat, 1 Jan 2011 17:03:27 +0000 Subject: [ExI] Von Neumann probes for what? In-Reply-To: References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> Message-ID: On Sat, Jan 1, 2011 at 3:05 PM, Tomasz Rola wrote: > Nono, I think one of us has got this wrong. Isn't it that if they go > inside the horizon, the "outside" universe speeds up? So the probes move > among the stars like fireworks of sort. And for us, they freeze. > > Or maybe you meant a white hole - they live their days in a bubble of fast > time, while the universe can only hopelessly wait in pause for whatever > gets out of this bubble one day. Of course, the pause is from their POV, > we simply live as usually. > Oooohhh. It's complicated. ;) If you like, forget about orbiting around black holes. My proposal that a post-singularity intelligence will have a million times speedup in processing still has the result that sending probes out in effect means that they will live through eons while the probes hardly physically move at all. That is sufficient for the argument. Discussion of time-dilation effects may not progress this discussion much. (But I'm tempted.......) :) (Though it appears that white holes don't exist.
They seem to be a theoretical result that nobody has spotted in reality). > > Or that priorities and expectations change a lot after that. For amazonian > native, going for holidays to Hawaii is quite incomprehensible, I guess. > Ditto for buying books and stacking them on the floor because other places > are already taken. And this is just a beginning. > > Agree completely. We are like ants speculating on the motivations of a being that stomped on one ants' nest and left another untouched. BillK From eugen at leitl.org Sat Jan 1 17:05:36 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 18:05:36 +0100 Subject: [ExI] Von Neumann probes for what? In-Reply-To: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> Message-ID: <20110101170536.GD16518@leitl.org> On Fri, Dec 31, 2010 at 02:47:58PM -0800, Samantha Atkins wrote: > What exactly do we expect these probes to do when they reach > a workable planetary system? A certain portion of the local The pioneers set up shop there, self-reproduce, continue to the next system. Then the successor waves arrive, set up shop there, self-reproduce, continue to the next system. Iterate, until you're in semi-steady state where each patch of the universe is equivalent to a patch of Earth Amazonas rainforest, only in 3d. The postecosystem continues, until the joules and atoms run out, at which point The End. Unless the rainforest creatures find a trick to make it go on ticking. There might be none. > resources would be converted into more probes and sending > those out. What the rest do is an interesting question. Take a random km^2 of this planet. What does it do? It depends on whom you ask. The ants, the lizards and the inebriated farmer will probably not be in agreement. > Presumably they are capable of producing a matrix > (computronium or something less exotic) out of local Computronium is no more exotic than the box you're writing this on.
It isn't neutronium, or unobtainium, unless it isn't made from classical matter. It's not obvious the latter is possible. > resources and instilling it with some part of the knowledge, > abilities of the originating civilization. If they are Well, yeah, the Mayflower brought in a number of colonists. > so capable then it does not seem possible that each probe The pioneers have only one fitness function to comply to: propagate as quickly as possible. They're streamlined to do only this task. Successor species waves will be of all colors, and some of them will be smart. Some extremely so. Others will be dumb as dirt. Evolution doesn't have a particular arrow. It just fills the niches. > is or evolves to be relatively unintelligent regarding the > decision as to whether some particular system can or > should be processed. After all it could unfold enough > computational capacity to consider the question more > deeply before proceeding with the main phase. If the > consideration led to a negative answer then it would > use at most enough resources to send a minimal set of > other probes out and clean up after itself. Take a common E. coli. It is sufficiently complex that we, the many smart big humans, don't know exactly how it works. Does E. coli know how it works? Why should it even care, as long as it can? > If the probe only converted local resources into more probes > and no probes set up an outpost of the originators then what > interest of the originators would be served by the program? What interests of your originators would be served by your existence? > If the probes could evolve to ditch all parts of their program > except replication that would be a failure. If the created > outposts could change so much as to be incompatible and even > a serious threat for the rest of the civilization that would > be something to consider before embarking on such a program.
> > I can see value in such as a way to ensure that all things > at all like the originating civ don't get wiped out by a > supernova or some other relatively localized catastrophe. > Or perhaps creating a buffer zone. > > Am I missing something? Yes. You're thinking way too much. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From msd001 at gmail.com Sat Jan 1 17:06:22 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 1 Jan 2011 12:06:22 -0500 Subject: [ExI] Von Neumann probes for what? In-Reply-To: <4D1F0ED6.2050205@aleph.se> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> Message-ID: On Sat, Jan 1, 2011 at 6:24 AM, Anders Sandberg wrote: > I think a probe infrastructure could be something that just looks like added > value to a civilization. It launches the probes, they spread and set up > waystations that can receive instructions and mindstates, as well as send > back observations. If they want to use the system they can. The key limiters > are whether the cost of the initial probe is high relative to the > civilization GDP and whether the time horizons of *every* entity within it > are so short there is no value in getting a fraction of the galaxy in the > far future. Has anyone considered the possibility that a civilization capable of effectively deploying these probes might already have a level of technology so advanced that there only needs to be one monitoring station in each universe? Something to the effect of a single existential bit in each of a multitude of "real" (to us) universes that are used only to track that each simulation is still running. Maybe the interesting results only manifest after the exhaustion of each universe and a post-mortem examination of the solution state represented by the end of life?
(aside from debugging, do we intently watch each step of a protein folding experiment?) Ok, so maybe we don't agree on the universe-as-a-sim viewpoint. I imagine there is still some possibility that our concept of how the universe works is necessarily limited; from an outside-this-box workshop there might be other parameters we simply can't observe. From msd001 at gmail.com Sat Jan 1 17:20:58 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 1 Jan 2011 12:20:58 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <20110101164437.GB16518@leitl.org> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> Message-ID: On Sat, Jan 1, 2011 at 11:44 AM, Eugen Leitl wrote: > On Sat, Jan 01, 2011 at 08:31:27AM -0800, Samantha Atkins wrote: > >> Why drag soul into this? ?A perfect copy is not the original. >> That is what this unending discussion seems to sum up to. ?OK. ?Fine. ?Next. > > A perfect copy is indistinguishable from the original. Location is not > a label encoded within the copy, orelse it would be distinguishable. > > See, it's easy. Sure, a perfect copy... But we can only asymptotically approach a perfect copy. If we can accept that fact then we should discuss what constitutes a "good enough" copy. In most cases I assume good enough is able to fool everyone that I need the copy to fool. So early on, I'll have a copy that's good enough to fool SPAM; then I'll have a copy that's good enough to fool you-all on this discussion list that I'm me despite the fact that I'm a copy of me; then I won't even have to show up at obligatory family get-togethers because the copy will fool everyone except me. Clearly I'm always ME, so the impersonator must necessarily be the damned copy. 
Of course, the copy will feel the same way about me/us. (after all, we think alike) From sjatkins at mac.com Sat Jan 1 17:32:13 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 01 Jan 2011 09:32:13 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: References: Message-ID: <39571FC5-602E-4648-836A-695CA5BCE34F@mac.com> On Jan 1, 2011, at 2:40 AM, Keith Henson wrote: > On Fri, Dec 31, 2010 at 11:07 PM, Samantha Atkins wrote: >> >> On Dec 31, 2010, at 9:48 AM, Keith Henson wrote: >> >>> On Fri, Dec 31, 2010 at 12:32 AM, Samantha Atkins wrote: >>> >>>> On Dec 30, 2010, at 3:18 PM, Keith Henson wrote: >>>>> >>>>> And 20 years ago, that was the right conclusion. Now we have a path, >>>>> even if it kind of expensive, to 9+km/sec exhaust velocity. That >>>>> means mass ratio 3 rockets to LEO and even better LEO to GEO. >>>> >>>> I must have missed it. Please give details, links, etc. How expensive? How large a payload? What technologies? >>> >>> Context is SBSP, 200 GW of new power per year, one million tons of >>> parts going up per year. That's about 125 tons per hour delivered to >>> GEO. >> >> >>> >>> The SSTO vehicle is an evolution of the Skylon design >>> http://www.astronautix.com/lvs/skylon.htm swapping lox for payload and >>> a sapphire window between the engines with 10-20 bar hydrogen and a >>> deep channel heat absorber behind it. The flow of cold hydrogen keeps >>> the window and the front surface of the heat absorber cool. The >>> absorber is described here: >>> http://www.freepatentsonline.com/4033118.pdf >>> >> >> A Skylon only delivers about 12 tons per trip to LEO. They were designed for no less than 200 launch lifetimes. And they were designed for two launch windows per day equatorial. I don't see how you get from that to 125 tons / hr to LEO, much less GEO. > > I said it was evolved from Skylon. Slight upgrade from 275 to 300 > tons takeoff. That is, incidentally, less than the smallest 747.
And > the launch window to a fixed place at GEO from a fixed place on the > earth is *always* open. There would be a takeoff and a landing every > 15-20 minutes. That's trivial compared to LAX or SFO. >> >>> One part is fixed by physics and the Earth's gravity field. The >>> minimum horizontal boost acceleration after getting out of the >>> atmosphere with substantial vertical velocity has to be slightly more >>> than a g to achieve orbit before running into the atmosphere. You >>> want to use the minimum acceleration you can at the highest exhaust >>> velocity you have energy for. This keeps down the laser power, which >>> is huge anyway. >>> >>> This takes 15-20 minutes and only in the last third do you get up to >>> the full 3000 deg K and 9.8 km/sec. The average (for this size >>> vehicle and 6 GW) is 8.5 km/sec, but the first 2 km/sec in air >>> breathing mode has an equivalent exhaust velocity of 10.5 km/sec. So >>> about 1/3 of takeoff mass (300 tons) gets to orbit. The vehicle mass >>> is about 50 tons leaving 50 tons for the LEO to GEO stage. >> >> That is a much much larger craft than a Skylon. Do you have links for it? Note that an Ares V launch (super heavy launcher) can only put about 63 tons into GEO. So this craft capability seems like serious magic to me. > > 25 tons larger than 275. I could send you the spreadsheets that > analyzed the performance of a hypothetical vehicle. > > Re it being serious magic, that's what twice the exhaust velocity of > the SSME does. >>> >>> So the payload at GEO per load needs to be 1/4 to 1/3 of 125 tons. >>> Again using laser heated hydrogen 35 tons of a 50 ton second stage >>> will get there. With some care in the design, it can all be used for >>> power satellite construction. >>> >>> The long acceleration means the lasers must track the vehicle over a >>> substantial fraction of the circumference of the earth. >> >> Wait, you are using lasers to provide thrust to this big honking lift vehicle?
> > Yes, that's why it takes such a huge amount of laser power. > >> I presume you are aware we have only tested this for very very small vehicles and never to high altitudes. > > I don't think it has been tested at all. But the physics and even the > engineering are utterly straightforward. > >> This is in no way near term tech for a vehicle of this size. > > It's a lot smaller technological jump than Apollo. > >> Or do you intend to use laser propulsion only for the LEO to GEO phase? > > Both. > >> Using the standard 1 MW/kg gives 300 GW for a 300 ton vehicle, 50 GW for a 50 ton vehicle. Lasers are generally 10% power efficient so 10x the output power is needed to drive them. What is the joke? > > 1 MW/kg is what you need to boost against 1 g. The trick here is to > get up high burning hydrogen and air with a substantial vertical and > horizontal velocity before the laser takes over powering propulsion. > Then you use a *long* acceleration to reach orbital velocity. See > figure 4 here http://www.theoildrum.com/node/5485 for a typical > trajectory. > > And laser diodes are now 50% efficient with an ongoing development > project projected to reach 85%. This is monochromatic rather than > coherent but the light can be converted to coherent at a loss of 10% > or less. I don't think you can quote laser diode efficiency when talking about these very high powered lasers without talking about the pumping methods, light and heat damage to components and so on. >> >>> Based on >>> Jordin Kare's work, this takes a flotilla of mirrors in GEO. Current >>> space technology is good enough to keep the pointing error down to .7 >>> meters at that distance while tracking the vehicle. The lasers don't >>> need to be on the equator so they can be placed where there is grid >>> power. They need to be 30-40 deg to the east of the launch point. >>> >> >> Uh huh. What is the max distance you are speaking of? > > Around one sixth of the circumference: 40,000/6, about 6,666 km.
That amounts to about 0.002 MOA tracking a rocket through the atmosphere. If we can do that then we can shoot down any old missile, any time with perfect accuracy. > > 10 m/sec^2 x 900 sec gives 9 km/sec. > > The distance is (1/2)(10)(900^2) m, about 4000 km. > >>> There are (I think) only four locations where there is an equatorial >>> launch site with thousands of km of water to the east. The US has one >>> set, China has a better one. >>> >>> The lasers are the big ticket item. At $10/watt, $60 B. >> >> That is the cost for 6 GW. >> >>> The rest, >>> vehicles, mirrors, ground infrastructure, R&D, etc might bring it up >>> to $100 B--which is a fraction of the expected profits per year from >>> selling that many power satellites. >>> >>> I don't expect it to be done by the US. China, maybe. >> >> I don't expect it to be done by anyone in this manner from the above description. I don't see how to make such a vehicle or operate that kind of laser propulsion system at such a scale. > > It's big, but utterly straightforward. This is mostly Dr. Kare's > work, I just proposed using something like Skylon to get it up and the > vehicle back at a reasonable cost. I don't think it is utterly straightforward. As you know, many with a lot more knowledge than I don't think so either. The current record for a small test vehicle climbing an admittedly low power beam is measured in the hundreds of feet. A power beam that strong would bring issues of whether it would propel or melt the nozzles. If the beam got a bit off center then it could be a real danger to the rocket itself, which presumably is not of a high melting point alloy such as the nozzles would be. The aiming is by no means trivial. Nor is the amount of power needed by the lasers. How do the orbital mirrors station-keep while reflecting that intense a power beam? What is the required station keeping and mirror adjustment speed? What kind of lasers do you have in mind for this application?
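[Editor's note: the boost-phase arithmetic quoted in this exchange can be sanity-checked with constant-acceleration kinematics and the Tsiolkovsky rocket equation. The figures below are the thread's own (10 m/sec^2 for 900 sec, 9 km/sec exhaust velocity); the code is a purely illustrative sketch, not from the original posts.]

```python
import math

# Figures quoted in the thread (illustrative, not authoritative):
a = 10.0      # boost acceleration, m/s^2 (slightly over 1 g)
t = 900.0     # burn time, s (the "15-20 minutes" boost phase)
ve = 9000.0   # laser-heated hydrogen exhaust velocity, m/s

delta_v = a * t              # 9,000 m/s, i.e. the 9 km/sec stated
distance = 0.5 * a * t**2    # ~4,050 km, matching "about 4000 km"

# Tsiolkovsky rocket equation: mass ratio needed for that delta-v.
mass_ratio = math.exp(delta_v / ve)  # e^1 ~= 2.72, so roughly 1/3
                                     # of takeoff mass reaches orbit
print(delta_v / 1000.0, distance / 1000.0, round(mass_ratio, 2))
# prints: 9.0 4050.0 2.72
```

The mass ratio of about e agrees with Keith's "about 1/3 of takeoff mass (300 tons) gets to orbit" and with the "mass ratio 3" claim at the top of the thread.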
This site, http://www.rp-photonics.com/high_power_lasers.html, doesn't lead me to think multi-GW lasers are particularly straightforward, especially not for such sustained, high precision power levels. The most powerful ground based lasers I could find were anti-missile lasers that seemed to top out at 10 MW or so. These were not atmosphere compensated. How much power will you lose to atmosphere compensation? I understand thus far that atmospheric self-focusing only works in narrow power ranges defined by the type of laser used, atmospheric conditions and amount of atmosphere to be traversed. All of this doesn't lead me to believe this is so straightforward. - samantha From sjatkins at mac.com Sat Jan 1 17:46:07 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 01 Jan 2011 09:46:07 -0800 Subject: [ExI] Whatever happened to morphological freedom? In-Reply-To: <4D1F0EDD.4010001@aleph.se> References: <4D1EAFEF.80603@speakeasy.net> <4D1F0EDD.4010001@aleph.se> Message-ID: On Jan 1, 2011, at 3:24 AM, Anders Sandberg wrote: > On 2011-01-01 05:39, Alan Grimes wrote: >>> From the sound of it, people are ecstatic over the prospect of all human >> choice being obliterated in favor of computronium. >> >> ********************************** >> Being able to choose the skin color of your avatar in VR is NOT >> morphological freedom. >> ********************************** > ... >> So whatever happened to the idea and where can I find the people who >> still support it? > > I guess I *have* to respond to this :-) > > Of course it is still around. It is even cited here and there in bioethics these days. I am working on a Morphological Freedom 2.0 paper with some colleagues. I think it has some real world traction ethically and politically, and might be something we should be pushing into the civil rights agenda. Freedom, morphological or otherwise, includes the freedom to make poor decisions and experience the consequences thereof.
It does not include any guarantees of success or ability to survive and thrive choosing any old morphology in any and all prevailing conditions. If, for instance, eventual uploads do outperform non-uploads significantly, then the non-uploads cannot cry that their morphological freedom is being denied them if their ability to compete and claim an economic niche is greatly reduced. Every pro-upload person I know is big into MF, especially their own freedom to upload in the first place! The notion that uploaders are against this has been exploded over and over again. It is very tiresome continuing to see this utterly bogus claim. It is also annoying when currently there not only is no MF but the freedom to choose to use various drugs for enhancement or otherwise is severely curtailed. - samantha From eugen at leitl.org Sat Jan 1 17:58:28 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 18:58:28 +0100 Subject: [ExI] Meat v. Machine In-Reply-To: <6C259209-0A68-446C-ADDF-2DB683C1AC9D@mac.com> References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <001a01cba914$34363940$9ca2abc0$@att.net> <6C259209-0A68-446C-ADDF-2DB683C1AC9D@mac.com> Message-ID: <20110101175828.GE16518@leitl.org> On Fri, Dec 31, 2010 at 04:19:43PM -0800, Samantha Atkins wrote: > Until human level AGI (about 3 decades out seems to be current > consensus), humans are needed. Given that we need space based Do you see the difference between having a system execute a plan with a turnaround time of 2.5 seconds or one with half an hour? Go buy a Kinect, put in a 2.5 second FIFO delay, and try building something in SL with it. Or just add a 2.5 s FIFO in the driver for mouse/6DOF controller, whatever you use. Now push the FIFO to 10 seconds, 30 seconds, 1 min, 5 min, 10 min, 30 min. See the difference?
Now think about what a simple collision avoidance would do. Just push semi-blindly, see the system settle into a nondisaster state. Now think *reflexes*. The dumb spinal cord is on the Moon, 2.5 sec turnaround time away, your brain is here. Everyone seems to think it's realtime fine-motorics, or bust. Not so. Yes, there's a difference between 30 ms and 2500 ms. But there's a much larger difference between 2500 ms and 1800000 ms. That's one hell of a handicap, even without microgravity. > resources before three decades from now we must build out human > support local space/lunar infrastructure. Humans are irrelevant. At least when it comes to space. You want to go places, you have to stop wearing the stupid man suit. > You need a lot of high mass initial equipment to lift from I disagree that you need to launch large (100 ton) packages. I think you can work well with >100 kg packages. With plasma thrusters you can probably deliver one half to one third of LEO payload to Moon surface semi-softly. So a ton to LEO is a useful threshold. > the gravity well in any case to have a basis to build > from this side of mature nano-assembler seeds which are > at least 5 - 6 decades out. It is a good question what > the minimal amount of lift needed is given the current tech We're well in excess of what we need. It would be nice if prices would come down a bit, but that is not actually relevant. More importantly, you can start working now, as none of the parts rely on particular features of the transport system you're going to use 15-20 years from now. > state of the art over time. The amount of mass you need > to lift from earth is inversely proportional to the > sophistication of the technology. But it is today quite substantial. I disagree it is substantial. And the only way to know is to start working *now*, so that in 15-20 years you have all the numbers.
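The FIFO experiment above is trivial to simulate. A minimal sketch (the class name and interface are purely illustrative, not any real driver API):

```python
from collections import deque

class DelayedLink:
    """A control channel with fixed one-way latency, modeled as a FIFO."""
    def __init__(self, delay_s):
        self.delay_s = delay_s
        self._fifo = deque()  # (send_time, command) pairs, oldest first

    def send(self, now, command):
        self._fifo.append((now, command))

    def receive(self, now):
        """Pop every command whose latency has elapsed by time `now`."""
        ready = []
        while self._fifo and now - self._fifo[0][0] >= self.delay_s:
            ready.append(self._fifo.popleft()[1])
        return ready

# Earth-Moon one-way light time is ~1.25 s, so a command round trip is ~2.5 s.
link = DelayedLink(delay_s=1.25)
link.send(0.0, "stop")
print(link.receive(1.0))  # [] -- the "stop" command hasn't arrived yet
print(link.receive(1.3))  # ['stop']
```

Wrap your controller input through this and crank `delay_s` up toward minutes to feel the handicap the paragraph above describes.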
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From kanzure at gmail.com Sat Jan 1 17:59:40 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 11:59:40 -0600 Subject: [ExI] Fwd: NewBook: The Ethics of Biomedical Enhancement In-Reply-To: <91.71.07087.F8A6F1D4@cdptpa-omtalb.mail.rr.com> References: <91.71.07087.F8A6F1D4@cdptpa-omtalb.mail.rr.com> Message-ID: ---------- Forwarded message ---------- From: L. Stephen Coles, M.D., Ph.D. Date: Sat, Jan 1, 2011 at 11:55 AM Subject: [GRG] NewBook: The Ethics of Biomedical Enhancement To: Gerontology Research Group Cc: Humanity+ , "Allen E. Buchanan, Ph.D." <allenb at duke.edu> To Members and Friends of the Los Angeles Gerontology Research Group: This sounds like our sort of book that will provide us with the arguments we need to refute our evangelical adversaries, providing that they are even willing to listen to reason. Happy New Year, Steve Coles Beyond Humanity?: The Ethics of Biomedical Enhancement (Uehiro Series in Practical Ethics) by Allen E. Buchanan (Oxford University Press, New York; January 15, 2011) Amazon.com Price: $35.00 Description: Biotechnologies already on the horizon will enable us to be smarter, have better memories, be stronger and quicker, have more stamina, live longer, be more resistant to diseases, and enjoy richer emotional lives. To some of us, these prospects are heartening; to others, they are dreadful. In *Beyond Humanity* a leading philosopher offers a powerful and controversial exploration of urgent ethical issues concerning human enhancement. These raise enduring questions about what it is to be human, about individuality, about our relationship to nature, and about what sort of society we should strive to have.
Allen Buchanan urges that the debate about enhancement needs to be informed by a proper understanding of evolutionary biology, which has discredited the simplistic conceptions of human nature used by many opponents of enhancement. He argues that there are powerful reasons for us to embark on the enhancement enterprise, and no objections to enhancement that are sufficient to outweigh them. About the Author: Allen Buchanan is Professor of Philosophy at Duke University. He is the author of *Human Rights, Legitimacy, and the Use of Force*; *Justice and Health Care*; and *Justice, Legitimacy, and Self-Determination*. http://fds.duke.edu/db/aas/Philosophy/allen.buchanan L. Stephen Coles, M.D., Ph.D., Co-Founder Los Angeles Gerontology Research Group URL: http://www.grg.org E-mail: scoles at grg.org E-mail: scoles at ucla.edu _______________________________________________ GRG mailing list GRG at lists.ucla.edu http://lists.ucla.edu/cgi-bin/mailman/listinfo/grg -- - Bryan http://heybryan.org/ 1 512 203 0507 From spike66 at att.net Sat Jan 1 18:01:41 2011 From: spike66 at att.net (spike) Date: Sat, 1 Jan 2011 10:01:41 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: <39571FC5-602E-4648-836A-695CA5BCE34F@mac.com> References: <39571FC5-602E-4648-836A-695CA5BCE34F@mac.com> Message-ID: <002901cba9dd$f1ccf9c0$d566ed40$@att.net> ... On Behalf Of Samantha Atkins ... >...That amounts to about 0.002 MOA tracking a rocket through atmosphere. If we can do that then we can shoot down any old missile, any time with perfect accuracy.
- samantha Well, *perfect* is a bit of a stretch, but the control systems are getting quite sexy these days: http://articles.latimes.com/2010/feb/13/business/la-fi-laser13-2010feb13 Meaningless but fun video of a laser spoiling a drone's whole day: http://www.youtube.com/watch?v=G3zxxogDRIw spike {8-] From sjatkins at mac.com Sat Jan 1 18:03:28 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 01 Jan 2011 10:03:28 -0800 Subject: [ExI] Von Neumann probes for what? In-Reply-To: <20110101170536.GD16518@leitl.org> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <20110101170536.GD16518@leitl.org> Message-ID: <51E1B34A-5562-484F-BA6C-B74479873D34@mac.com> On Jan 1, 2011, at 9:05 AM, Eugen Leitl wrote: > On Fri, Dec 31, 2010 at 02:47:58PM -0800, Samantha Atkins wrote: > >> What exactly do we expect these probes to do when they reach >> a workable planetary system? A certain portion of the local > > The pioneers set up shop there, self-reproduce, continue to > the next system. Of what does this "setting up shop" consist? Creating things in support of further arrivals perhaps. Perhaps also seeding a local civ. > > Then the successor waves arrive, set up shop there, self > reproduce, continue to the next system. > > Iterate, until you're in semi-steady state where each > patch of the universe is equivalent to a patch of Earth > Amazonas rainforest, only in 3d. What for? Some waves will stay and develop the local area. Native successors will arise. But how convergent or divergent from the originating civ the result is, and whether that is actually a benefit to the originating civ, is part of my question. > > The postecosystem continues, until the Joules and atoms > run out, at which point The End. Unless the rainforest > creatures find a trick how to make it go on ticking. There > might be none. > >> resources would be converted into more probes and sending >> those out. What the rest do is an interesting question. > > Take a random km^2 of this planet. What does it do?
It > depends on whom you ask. The ants, the lizards and the > inebriated farmer will be probably not in agreement. This is not terribly germane to my question. > >> Presumably they are capable of producing a matrix >> (computronium or something less exotic) out of local > > Computronium is no more exotic than the box you're > writing this on. It isn't neutronium, or unobtainium, > unless it isn't made from classical matter. It's not > obvious latter is possible. Sure it is as we have not decided exactly what its character is nor do we no how to produce it. It is a theoretical construct. > >> resources and instilling it with some part of the knowledge, >> abilities of the originating civilization. If they are > > Well, yeah, Mayflower brought in a number of colonists. > >> so capable then it does not seem possible that each probe > > The pioneers have only one fitness function to comply to: > propagate as quickly as possible. They're streamlined to > do only this task. Not so or I don't see why it would be so and the probe have any real capacity to do any good for the originating civ. A mere cosmic yeast mold is not terribly useful to anyone unless you like cosmic yeast. > > Successor species waves will be of all colors, and some > of them will be smart. Some extremely so. Others will be > dumb as dirt. > > Evolution doesn't have a particular arrow. It just fills > the niches. > This in not natural evolution but an engineered expansion. They are not the same thing. >> is or evolves to be relatively unintelligent regarding the >> decision as to whether some particular system can or or >> should be processed. After all it could unfold enough >> computational capacity to consider the question more >> deeply before proceeding with the main phase. If the >> consideration led to a negative answer then it would >> use at most enough resources to send a minimal set of >> other probes out and clean up after itself. > > Take a common E. coli. 
It is sufficiently complex that we, > the many smart big humans, don't know exactly how it works. > > Does E. coli know how it works? Why should it even care, > as long as it can? > This is not addressing my question. >> If the probe only converted local resources into more probes >> and no probes set up an outpost of the originators then what >> interest of the originators would be served by the program? > > What interests of your originators would be served by your > existence? Nor is this. > >> If the probes could evolve to ditch all parts of their program >> except replication that would be a failure. If the created >> outposts could change so much as to be incompatible and even >> a serious threat for the rest of the civilization that would >> be something to consider before embarking on such a program. >> >> I can see value in such as a way to ensure that all things >> at all like the originating civ don't get wiped out by a >> supernova or some other relatively localized catastrophe. >> Or perhaps creating a buffer zone. >> >> Am I missing something? > > Yes. You're thinking way too much. Boo hoo. - s From kanzure at gmail.com Sat Jan 1 18:05:25 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 12:05:25 -0600 Subject: [ExI] Is there anything special about Boston? In-Reply-To: <4D1F4C10.3000208@gmail.com> References: <4D1F4C10.3000208@gmail.com> Message-ID: On Sat, Jan 1, 2011 at 9:45 AM, AlgaeNymph wrote: > So what's in the area that'd be a big draw? In Boston?
synthetic biology stuff http://openwetware.org/ http://igem.org/ http://syntheticbiology.org/ http://biobricks.org/ local do-it-yourself biohacking group http://diybio.org/ http://groups.google.com/group/diybio http://groups.google.com/group/diybio-boston http://thesprouts.org/ Ed and his synthetic neurobiology group are especially transhumanist: http://edboyden.org/ http://syntheticneurobiology.org/ fablabs http://fab.cba.mit.edu/ http://en.wikipedia.org/wiki/Fab_lab Lots of other stuff I am forgetting about. - Bryan http://heybryan.org/ 1 512 203 0507 From spike66 at att.net Sat Jan 1 17:54:54 2011 From: spike66 at att.net (spike) Date: Sat, 1 Jan 2011 09:54:54 -0800 Subject: [ExI] sound archive Message-ID: <002501cba9dc$ff2e1320$fd8a3960$@att.net> Today while leaving the restaurant I heard an instantly recognizable sound that I hadn't heard in thirty years or more. It was a very distinctive sound that anyone over about fifty would know: cleenk... flish...flish...flish...... clack. It was the sound of one of those old time metal cigarette lighters, the kind that were swept away by the disposable plastic Bic lighters in the 1970s. Now of course it is rare to even see anyone smoking. Tobacco anyway. But anyone who was around in the 1960s saw plenty of smokers and heard that sound often. The lighters outlived the lungs of the users. This one is a knockoff of the old Rossignols, but it looks exactly like the original: http://www.dinodirect.com/Deluxe-Metal-Oil-Cigarette-Lighter-Gold/AFFID-11.html?DinoDirect OK so I can buy something like this now, and even the fluid for it, thanks to the internet. For perhaps 10 years from about 1990 to 2000 that product would have been extinct, because, as far as I know, they weren't available from mainstream stores. So plenty of younger people wouldn't know what that flish flish sound is.
I recall in my own misspent youth, there was a local rock station which used to play a game where they would give concert tickets to the first person who could identify a sound, usually something that *anyone* over 60 at the time would get instantly, such as the sound of a model A starting, or the whirring sound an old time refrigerator makes when an icicle from the cold coil grew long enough to touch the circulation fan. Modern refrigerators don't do that, but the old ones did, pre-1940 perhaps. The station had a target audience of 10 to 20 years, and of course we didn't know those sounds, but our grandparents did. They had one of those refrigerators, which one fixed by unplugging for about half an hour, icicle melted, no more whirr for a couple weeks or more. We need to create a sound archive of stuff that we would all know, but the current younger ones wouldn't. A good example is that sound of a dialup modem from about 1994: wheeeee...deeeeblbbkblblblblbllbllblblllbpp... Remember that? How long has it been since you heard it? Is there already a sound archive somewhere? We can google on "Rossignol lighter" but there is no way to google on an unknown sound and have it give back "Rossignol lighter" or dialup modem. spike From sjatkins at mac.com Sat Jan 1 18:14:55 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 01 Jan 2011 10:14:55 -0800 Subject: [ExI] Meat v. 
Machine In-Reply-To: <20110101175828.GE16518@leitl.org> References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <001a01cba914$34363940$9ca2abc0$@att.net> <6C259209-0A68-446C-ADDF-2DB683C1AC9D@mac.com> <20110101175828.GE16518@leitl.org> Message-ID: On Jan 1, 2011, at 9:58 AM, Eugen Leitl wrote: > On Fri, Dec 31, 2010 at 04:19:43PM -0800, Samantha Atkins wrote: > >> Until human level AGI (about 3 decades out seems to be current >> consensus), humans are needed. Given that we need space based > > Do you see the difference between having a system execute a plan > with a turnaround time of 2.5 seconds or one with half an hour? > > Go buy a Kinect, put in a 2.5 second FIFO delay, and try building > something in SL with it. Or just add a 2.5 s FIFO in the driver > for mouse/6DOF controller, whatever you use. > > Now push the FIFO to 10 seconds, 30 seconds, 1 min, 5 min, 10 min, > 30 min. See the difference? > > Now think about what a simple collision avoidance would > do. Just push semi-blindly, see the system settle into > a nondisaster state. Now think *reflexes*. The dumb > spinal cord is on the Moon, 2.5 sec turnaround time away, > your brain is here. > > Everyone seems to think it's realtime fine-motorics, > or bust. Not so. Yes, there's a difference between 30 ms > and 2500 ms. But there's a much larger difference between > 2500 ms and 1800000 ms. That's one hell of a handicap, > even without microgravity. OK. Design the robotics that can, say, repair the Hubble, do various space walk equivalent missions all with no humans closer than Earth. Oh, the systems must be general enough to be used for anything a trained human can do as far as physical capabilities are concerned. All but rudimentary control you mention above is done remotely. Then get back to me.
> >> resources before three decades from now we must build out human >> support local space/lunar infrastructure. > > Humans are irrelevant. At least when it comes to space. > You want to go places, you have to stop wearing the > stupid man suit. I just presented an argument why they are not yet irrelevant that you have not countered successfully. The only general intelligence of sufficient power currently around does not yet have the ability to shed its biology. So again, if you need more localized general intelligence rather than at tens or thousands of miles' remove, then you need humans in space - today. > >> You need a lot of high mass initial equipment to lift from > > I disagree that you need to launch large (100 ton) > packages. I think you can work well with >100 kg > packages. With plasma thrusters you can probably > deliver one half to one third of LEO payload to > Moon surface semi-softly. So a ton to LEO is a > useful threshold. Construction materials? Large focusing antennae for SSP projects? You can either do hundreds or thousands of launches or you can do a relatively few large launches for the acceleration-hardened larger components. The latter is cheaper in all ways and gets a larger resource base in play much more quickly. > >> the gravity well in any case to have a basis to build >> from this side of mature nano-assembler seeds which are >> at least 5 - 6 decades out. It is a good question what >> the minimal amount of lift needed is given the current tech > > We're well in excess of what we need. It would be nice > if prices would come down a bit, but that is not actually > relevant. I don't see why you would claim that. Many projects are not doable given today's launch cost and launch facility limitations. > > More importantly, you can start working now, as none of the > parts rely on particular features of the transport system you're > going to use 15-20 years from now. > Which parts for precisely what? >> state of the art over time.
The amount of mass you need >> to lift from earth is inversely proportional to the >> sophistication of the technology. But it is today quite substantial. > > I disagree it is substantial. And the only way to know is > to start working *now*, so that in 15-20 years you have all > the numbers. > On what? What do you suggest launching that is off the shelf now and for what purposes? - s From agrimes at speakeasy.net Sat Jan 1 18:10:53 2011 From: agrimes at speakeasy.net (Alan Grimes) Date: Sat, 01 Jan 2011 13:10:53 -0500 Subject: [ExI] Whatever happened to morphological freedom? In-Reply-To: References: <4D1EAFEF.80603@speakeasy.net> <4D1F0EDD.4010001@aleph.se> Message-ID: <4D1F6E2D.8030707@speakeasy.net> Samantha Atkins wrote: > Freedom, morphological or otherwise, includes the freedom to make poor decisions and experience the > consequences thereof. It does not include any guarantees of success or ability to survive and thrive > choosing any old morphology in any and all prevailing conditions. I have the greatest confidence that you will continue to put forth every effort to change the "prevailing conditions" to be absolutely inhospitable to even the most advanced nano-cyborg at the earliest possible date while paying lip-service to morphological freedom the entire time. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sjatkins at mac.com Sat Jan 1 18:21:29 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 01 Jan 2011 10:21:29 -0800 Subject: [ExI] Whatever happened to morphological freedom? In-Reply-To: <4D1F6E2D.8030707@speakeasy.net> References: <4D1EAFEF.80603@speakeasy.net> <4D1F0EDD.4010001@aleph.se> <4D1F6E2D.8030707@speakeasy.net> Message-ID: <9AD6E5A5-C366-4624-9B8F-64C71C8CBE94@mac.com> On Jan 1, 2011, at 10:10 AM, Alan Grimes wrote: > Samantha Atkins wrote: > >> Freedom, morphological or otherwise, includes the freedom to make poor decisions and experience the >> consequences thereof.
It does not include any guarantees of success > or ability to survive and thrive >> choosing any old morphology in any and all prevailing conditions. > > I have the greatest confidence that you will continue to put forth every > effort to change the "prevailing conditions" to be absolutely > inhospitable to even the most advanced nano-cyborg at the earliest > possible date while paying lip-service to morphological freedom the > entire time. I will go for whatever works the best for me imho and others are welcome to do whatever they think best as long as they do not attempt to forcibly coerce me. I have better things to do than explicitly tuning the facts of reality to make other choices unworkable even if such micro-managing of reality was remotely in my power or in the power of any being possible in reality. - s From eugen at leitl.org Sat Jan 1 18:24:48 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 19:24:48 +0100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> Message-ID: <20110101182448.GI16518@leitl.org> On Sat, Jan 01, 2011 at 12:20:58PM -0500, Mike Dougherty wrote: > Sure, a perfect copy... But we can only asymptotically approach a > perfect copy. If we can accept that fact then we should discuss what My next argument in this chain is that you don't need an atom-perfect copy for an information-encoding object. The makeup of a particular optical disk doesn't matter if it ships the same software version, is in fact bit-identical. The color of the case of the synchronized computers running the same program installed from the above disk is irrelevant to the program running on it. 
In fact, the programs can't know the case color is different, unless you supply that information, at which stage the synchronization boundary condition takes hold, and withholds that information from one copy, or otherwise ABENDs due to WHOOP WHOOP STATE BIFURCATION DETECTED WHOOP WHOOP. > constitutes a "good enough" copy. In most cases I assume good enough > is able to fool everyone that I need the copy to fool. So early on, > I'll have a copy that's good enough to fool SPAM; then I'll have a > copy that's good enough to fool you-all on this discussion list that > I'm me despite the fact that I'm a copy of me; then I won't even have > to show up at obligatory family get-togethers because the copy will > fool everyone except me. Clearly I'm always ME, so the impersonator > must necessarily be the damned copy. Of course, the copy will feel > the same way about me/us. (after all, we think alike) A synchronized copy doesn't just think alike, it thinks exactly in unison. So you can't fool anybody else while you're fooling me. The input is exactly the same, the internal state is exactly the same, the evolution at each clock tick is exactly the same, the output is exactly the same. Exact copies are terribly boring, terribly fragile things. They're known as hot failover in HA circles. Divergence is default. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rtomek at ceti.pl Sat Jan 1 18:24:57 2011 From: rtomek at ceti.pl (Tomasz Rola) Date: Sat, 1 Jan 2011 19:24:57 +0100 (CET) Subject: [ExI] Von Neumann probes for what? In-Reply-To: References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <4D1F0ED6.2050205@aleph.se> Message-ID: On Sat, 1 Jan 2011, BillK wrote: > if you like, forget about orbiting around black holes.
My proposal > that a post-singularity intelligence will have a million times speedup > in processing still has the result that sending probes out in effect > means that they will live through eons while the probes hardly > physically move at all. That is sufficient for the argument. Well, if boredom is their problem - they can go into suspend-to-ram or suspend-to-disk mode :-). Or they can learn to meditate. I wonder how they would achieve this, on a substrate million zillion gazillions times faster than a human, etc etc. When I had to make my way through a very boring lecture, I used to meditate a lot. Time passed like glass thrown against the wall. To my fellows who were not inclined towards spiritual enhancements, those hours must have been a torture however. Believe it or not, but forty five minutes of lecture is only about a hundred and eighty breaths. Maybe two hundred, maybe a hundred and fifty. On a good day, a hundred. Something like this. Moral: boredom is not a problem - it can be either avoided or killed. No, reading was not an option. I tried it but the lecturer was too distracting. BTW is there anything forcing them to use relativistic physics? Like, the fact that we don't know it, doesn't mean they do not know either. Ugh, triple negation, is it still English that I use? > (Though it appears that white holes don't exist. They seem to be a > theoretical result that nobody has spotted in reality). Sure. But there is so much discussion here about things purely ethernal ;-) that adding one or two does not make any difference at all. I doubt anybody but you noticed it ;-). Actually, talking about non-existent subjects, like benevolent AIs (and what they could do as leisure activity once they no longer enjoy killing humans), seems to be the only and sufficient reason for this list to exist and for me to subscribe :-). > Agree completely. We are like ants speculating on the motivations of a > being that stomped on one ant's nest and left another untouched.
Terrific perspective. But we are quite well equipped to discuss and understand other ants, which are our biggest concern, or should be. The bigger beings are indifferent and can be likened to supernovae. We can't stop it. We could have built a planetary shield if enough decision makers had a clue - or gone the Swiss way (shelters for every citizen). Regards Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From spike66 at att.net Sat Jan 1 18:13:01 2011 From: spike66 at att.net (spike) Date: Sat, 1 Jan 2011 10:13:01 -0800 Subject: [ExI] Meat v. Machine In-Reply-To: <20110101175828.GE16518@leitl.org> References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <001a01cba914$34363940$9ca2abc0$@att.net> <6C259209-0A68-446C-ADDF-2DB683C1AC9D@mac.com> <20110101175828.GE16518@leitl.org> Message-ID: <003001cba9df$878bd340$96a379c0$@att.net> ... On Behalf Of Eugen Leitl ... >...Humans are irrelevant. At least when it comes to space. You want to go places, you have to stop wearing the stupid man suit... Eugen ... Excellent! I think it is pretty much right. We apes are waaaay too big, and there is too much of our mass that isn't doing anything useful in space travel, such as flab, heavy skeletal structure and most of our muscle mass. This version works so much better than the commentary I made back in the 90s that caused so many to be squicked, such as that all long term space travelers should have their legs removed. Brains are great, genitals good, hands OK, the rest of it, nah, not so much. We might want to slightly modify Gene's version to "...we must take off the stupid meat suits..."
spike From eugen at leitl.org Sat Jan 1 19:03:07 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 20:03:07 +0100 Subject: [ExI] Meat v. Machine In-Reply-To: References: <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <001a01cba914$34363940$9ca2abc0$@att.net> <6C259209-0A68-446C-ADDF-2DB683C1AC9D@mac.com> <20110101175828.GE16518@leitl.org> Message-ID: <20110101190307.GO16518@leitl.org> On Sat, Jan 01, 2011 at 10:14:55AM -0800, Samantha Atkins wrote: > OK. Design the robotics that can, say, repair the Hubble, We're not trying to repair the Hubble. We're trying to bootstrap (by the cheapest route) a system which can process regolith and cryotrap volatiles into thin-film PVs and start inching towards self-rep closure of unity and above. (Oh, and by the way, if you think extravehicular activities in suits are anything like you're familiar with, I have bad, bad news for you. People have been deliberately ripping their nails out, so they could work better. There is a very good reason why the R2 thing is being built, and once it has relegated the astronauts to the back seat they will suffer the same fate as military pilots today, who are now grounded with joysticks). I do not know how to build such a system, but I would approach it from an emergent flock behaviour (e.g. leafcutter ant colony) perspective. You need the ant queen fabrication which does the magic, large PV area to supply it with power, and many redundant robotic platforms for material transport and assembly/disassembly. Since this is UHV and power is (soon) abundant you would probably use things like magnetrons/klystrons, E-beam writing, rapid-prototyping techniques, (mostly) dry sorting and sifting, hydrogen (from cryotrap water electrolysis) reduction, magnetic and electrostatic sorting, melt electrolysis reduction, and so on.
These are all largely automatable processes, requiring humans for coordination. Initially, you would want to assign each robot to one human operator, rotated across ground teams in control centers in different time zones. > do various space walk equivalent missions all with no > humans closer than earth. Oh, the systems must be general > enough to be used for anything a trained human can do Not necessary. We're still in bootstrap. > as far as physical capabilities are concerned. > All but rudimentary control you mention above is > done remotely. Then get back to me. Yo, check dis out http://www.golem.de/1012/80216.html (all 6 pages). > > > >> resources before three decades from now we must build out human > >> support local space/lunar infrastructure. > > > > Humans are irrelevant. At least when it comes to space. > > You want to go places, you have to stop wearing the > > stupid man suit. > > I just presented an argument why they are not yet irrelevant They are irrelevant because you will not be getting up any time soon, and they're double irrelevant as soon as we're talking about going beyond the inner solar system. > that you have not countered successfully. The only The proof of the pudding is in the eating. The only convincing argument would be a working facility. You might see this within the next 30 years. Or not. > general intelligence of sufficient power currently > around does not yet have the ability to shed its > biology. So again, if you need more localized > general intelligence rather than at tens or thousands > of miles' remove then you need humans in space - today. > > > >> You need a lot of high mass initial equipment to lift from > > > > I disagree that you need to launch large (100 ton) > > packages. I think you can work well with >100 kg > > packages. With plasma thrusters you can probably > > deliver one half to one third of LEO payload to > > Moon surface semi-softly. So a ton to LEO is a > > useful threshold. > > Construction materials?
In situ resources are called that for a reason. > Large focusing antennae for SSP projects? Phased-array solid-state beamforming, with the panel backside doubling as antenna. Flat, no movable parts. > You can either do hundreds or thousands of launches > or you can do a relatively few large launches for > the acceleration hardened larger components. The No monkeys, no large components. Struts, trusses, thin-film, everything modular. > latter is cheaper in all ways and gets a larger > resource base in play much more quickly. Time is not the problem, money is. The bootstrap will be a slow-motion thing in the early phases of the exponential, and there will be optimization and improvisation along the way. If it starts 20 years from now, the situation is completely different 20 years after. > > > >> the gravity well in any case to have a basis to build > >> from this side of mature nano-assembler seeds which are > >> at least 5 - 6 decades out. It is a good question what > >> the minimal amount of lift needed is given the current tech > > > > We're well in excess of what we need. It would be nice > > if prices would come down a bit, but that is not actually > > relevant. > > I don't see why you would claim that. Many projects are not doable given today's launch cost and launch facility limitations. I'm not claiming that current lifters are sufficient for all projects, I'm claiming they're fully sufficient for this particular project. People who think that they can do serious work without in situ resource utilization and bootstrap are welcome to their way of doing things. And this is all we're likely going to get, so we can as well start planning that way. Surprises of the positive kind might or might not materialize. > > > > More importantly, you can start working now, as none of the > > parts rely on particular features of the transport system you're > > going to use 15-20 years from now. > > > > Which parts for precisely what?
When I order an expensive piece of hardware from Amazon the friendly delivery guy is playing a critical part, but it is not the most complicated part. > On what? What do you suggest launching that is off > the shelf now and for what purposes? I outlined the system required to be ready and tested before it can be fielded. The nature of the launch vehicle doesn't really matter. Whether I send packages by Post or DHL, it's not really relevant. They're exchangeable. The parcel is not. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From agrimes at speakeasy.net Sat Jan 1 18:32:28 2011 From: agrimes at speakeasy.net (Alan Grimes) Date: Sat, 01 Jan 2011 13:32:28 -0500 Subject: [ExI] Whatever happened to morphological freedom? In-Reply-To: <4D1F0EDD.4010001@aleph.se> References: <4D1EAFEF.80603@speakeasy.net> <4D1F0EDD.4010001@aleph.se> Message-ID: <4D1F733C.1020906@speakeasy.net> Anders Sandberg wrote: > I guess I *have* to respond to this :-) > Of course it is still around. It is even cited here and there in > bioethics these days. I am working on a Morphological Freedom 2.0 paper > with some colleagues. And I will start saving up my harshest criticisms for it. ;) I might even unleash the Tortoise and Achilles on it. > I think it has some real-world traction ethically > and politically, and might be something we should be pushing into the > civil rights agenda. Only to be revoked a few hours after you figure out how to fabricate computronium. =( > However, I think the issue discussed here on the list is separate. MF is > about rights - what autonomous individuals should be allowed to do. Yep. > But there might be technological possibilities that are so enticing, SPEAK FOR YOURSELF.
> or long-term evolutionary or economic pressures that are so strong, that > in the limit people or post-people become morphologically similar > (perhaps with an insignificant minority avoiding it). That would be extremely unfortunate. > This is not an > ethical issue in the usual sense: it could even be the result of > individual, fully informed rational decisions. There might be a loss of > value in diversity (a bit like language loss) or even something deeper, > but it would be a collective-level ethical issue rather. Say what? > If the price of bodies is so high that hardly anybody can afford them, > as a negative rights libertarian type I still think that is compatible > with morphological freedom. Absurd. Progress --> things get more plentiful and cheaper. Therefore bodies will always get cheaper and more extravagant bodies will become possible (though never practical). Furthermore, do you really expect me to believe that an M-brain is affordable but a few extra tons of titties are not? =P The only imaginable scenario is if the virtual population is allowed to get north of 10^18 or so, each greedy for resources. Obviously, such a population rate of increase and density are unacceptable. As you illustrate, an acceptable quality of life is not achievable under such circumstances, so both a diaspora to the infinite vastness and a sensible throttling back of the birth/duplication rate - to maybe doubling only every 20 years or so - would eliminate all practical resource contention. > My positive rights colleagues would argue > that to have real MF we need a society that can support buying bodies > somehow (and within some limits; this is what we are thinking about in > our paper). Why wouldn't such a society exist naturally? I started rambling about uploading in the normally quiet channel #neuroscience on irc.freenode.net, and the person who responded was like "Say what???"
=P Later he said: (00:29:07) cads: I think that crowd want their efforts to produce an unrealistic level of ego aggrandizement (01:00:36) DevilInside: transhumanists tend to be somewhat insane (01:00:48) DevilInside: i guess i would be a moderate, sane one (01:01:16) DevilInside: also, how often have absurd theories about the future of scientific and technological progress ever come close to reality all that much? (01:03:19) DevilInside: also, i don't think you can transfer consciousness (01:04:00) DevilInside: you can maybe at one point create conscious entitites that aren't human (01:04:12) DevilInside: but you can't transfer the consciousness of a biological human to that, how is that possible? (01:04:46) DevilInside: of course most likely the future of cognition and understanding doesn't belong to anything resembling naturally evolved humans, and why should it? (01:05:00) DevilInside: even the smartest humans are deeply flawed wetware (01:05:30) DevilInside: evolution got us here, now we are going to start a completely different kind (01:06:07) DevilInside: so are these transhumanists saying that HUMANS are going to change into sometihng else? i think it'd be false to even call these entities human anymore and: http://video.google.com/videoplay?docid=8576072297424860224# So you are proposing that uploading will take off just like cellular telephones have? Absurd. If you argue with the people, they'll declare war on you. Only 0.5%-ers want to upload. > (Still, I can imagine things like Oxford's Port Meadow to remain. > Wikipedia: "In return for helping to defend the kingdom against the > marauding Danes, the Freemen of Oxford were given the 300 acres of > pasture next to the River Thames by Alfred the Great who founded the > City in the 10th Century. The Freemen's collective right to graze their > animals free of charge is recorded in the Domesday Book of 1086 and has > been exercised ever since." 
- there are usually some cows or horses > around, although their importance to the economy and to most people has > dwindled more orders of magnitude than were imaginable when King Alfred > was fighting vikings. So maybe there will be a few morphologically free > bodies frolicking somewhere on future M-brains, protected by regulations > laid down in the remote 21st century.) That sounds barely acceptable; I would like to apply for a reservation of ten acres. I had toyed around with such notions for several years; I even contemplated writing a story about such a scenario. In one permutation I lived for many centuries in an 8' cube, under constant torment by beings covetous of my atoms. I am not sure how I would have ended the story. One possible ending is that one instant I would just vanish, inexplicably to the m-brain, into another dimension or to a planet far out in free space. Another ending would have me mastering control over the fabric of reality (the M-brain caring nothing of physics, only computing cores), and then unleashing a firestorm of vengeance that would utterly obliterate the singleton. I must confess that there are other singleton scenarios that I find attractive but, for ethical reasons, don't actively advocate except, possibly, to try to dilute the insane stampede towards uploading and m-brains. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From kanzure at gmail.com Sat Jan 1 19:25:22 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 13:25:22 -0600 Subject: [ExI] Whatever happened to morphological freedom? In-Reply-To: <4D1F733C.1020906@speakeasy.net> References: <4D1EAFEF.80603@speakeasy.net> <4D1F0EDD.4010001@aleph.se> <4D1F733C.1020906@speakeasy.net> Message-ID: On Sat, Jan 1, 2011 at 12:32 PM, Alan Grimes wrote: > Why wouldn't such a society exist naturally?
I started rambling about > uploading in the normally quiet channel #neuroscience on > irc.freenode.net, and the person who responded was like "Say what???" =P the transhumanist channel on freenode is #hplusroadmap please leave your computronium conspiracy theory behind, though-- we will not join your uploading cult, alan. - Bryan http://heybryan.org/ 1 512 203 0507 From msd001 at gmail.com Sat Jan 1 20:19:52 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 1 Jan 2011 15:19:52 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <20110101182448.GI16518@leitl.org> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <20110101182448.GI16518@leitl.org> Message-ID: On Sat, Jan 1, 2011 at 1:24 PM, Eugen Leitl wrote: > A synchronized copy doesn't just think alike, it thinks exactly > in unison. So you can't fool anybody else while you're fooling me. > The input is exactly the same, the internal state is exactly > the same, the evolution at each clock tick is exactly the same, > the output is exactly the same. Are temporally distant copies also "in unison"? If I make a Perfect Copy(tm) then throw it down a black hole, are we still synch'd? (time dilation is one thing, the fact that verifying information signals can't get out is another) From eugen at leitl.org Sat Jan 1 20:31:59 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 1 Jan 2011 21:31:59 +0100 Subject: [ExI] simulation as an improvement over reality.
In-Reply-To: References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <20110101182448.GI16518@leitl.org> Message-ID: <20110101203159.GQ16518@leitl.org> On Sat, Jan 01, 2011 at 03:19:52PM -0500, Mike Dougherty wrote: > Are temporally distant copies also "in unison" ? They're not in unison, but by comparing the trajectories (or settling for checksum-like measurements) you can verify that a particular discrete evolution trajectory fragment is the same in both cases. This is even less useful than keeping two adjacent systems in synchrony, as you'd be limited to agents locked into their own virtual environment cages, completely deterministic evolution, no interaction with reality. > If I make a Perfect Copy(tm) then throw it down a black hole, are we If you march in unison, then blow one instance's brains out, will you notice? (That's the basic idea behind HA setups with a hot failover: you can shoot one system with impunity, at no cost other than reducing your redundancy level.) > still synch'd? (time dilation is one thing, the fact that verifying > information signals can't get out is another) ILLEGAL INSTRUCTION DIV BY ZERO. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Sat Jan 1 20:18:50 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 1 Jan 2011 15:18:50 -0500 Subject: [ExI] simulation as an improvement over reality.
In-Reply-To: <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> Message-ID: On Jan 1, 2011, at 11:31 AM, Samantha Atkins wrote: > Why drag soul into this? That was not my decision. > A perfect copy is not the original. According to some it is of extraordinary importance that a perfect copy is not the original; according to some there is an enormous difference between the original and the copy, even though that copy is PERFECT; and even though the scientific method can detect no difference at all, much less an enormous one, between the two. And that seems to sum up just what this unending discussion is about. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Jan 1 21:12:36 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 01 Jan 2011 15:12:36 -0600 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <20110101164437.GB16518@leitl.org> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> Message-ID: <4D1F98C4.50109@satx.rr.com> On 1/1/2011 10:44 AM, Eugen Leitl wrote: >> Why drag soul into this? A perfect copy is not the original. >> > That is what this unending discussion seems to sum up to. OK. Fine. Next. > > A perfect copy is indistinguishable from the original. Except to the original, marched off to the dungeon. > Location is not > a label encoded within the copy, or else it would be distinguishable. > See, it's easy. It's always easy to answer the wrong question.
Damien Broderick From kanzure at gmail.com Sat Jan 1 21:35:55 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 15:35:55 -0600 Subject: [ExI] Garage Innovation - The Scientist - Rob Carlson In-Reply-To: References: Message-ID: On Sat, Jan 1, 2011 at 3:22 PM, Michal Galdzicki wrote: > Garage Innovation - The Scientist - Magazine of the Life Sciences > via www.the-scientist.com on 1/1/11 > http://www.the-scientist.com/article/display/57880/# > By Rob Carlson > The potential costs of regulating synthetic biology must be counted against > putative benefits. """ What to do about biohackers in the garage? The apparent answer from the US Presidential Commission for the Study of Bioethical Issues, whose first task has been to examine the emerging field of synthetic biology, is "prudent vigilance." It isn't just tinkerers who are intrigued by the prospect of building genes and genomes. Many scientists are discovering exciting new ways to use synthetic DNA. Moreover, the exponentially decreasing cost of such DNA has encouraged innovative approaches to making drugs, biofuels, and other materials. As early as next year, synthetic biology may be used to produce flu-vaccine strains in days to weeks, rather than the 12 months now required. Yet discussions of synthetic biology always include the din of warnings about artificial pathogens and Frankenstein experiments escaping the lab. Therein lies the rub for the commission: "Let science rip," in the words of chair Amy Gutmann of the University of Pennsylvania, or attempt to constrain access to an already globally commercialized technology. When I addressed the commission in July of last year, I emphasized the critical importance of small organizations in producing technological innovations. There is every reason to expect that garage innovation will be as important to biological technologies as it was to IT and dozens of others that we rely on every day.
Consequently, one challenge the commission faces is to reconcile the concern for safe development with the drive for rapid development. Restriction of access to technology and markets would slow development. Regulation could result in a black market -- the worst possible outcome. Given the apparent power of the emerging toolkit of synthetic biology, it is too easy to call for restrictions, such as regulations and licensing, without pausing to account for the consequent potential costs. One possible strategy -- restricting access to raw materials and markets -- has had very clear negative consequences in the effort to reduce the production and consumption of illegal drugs. In the case of methamphetamine, the US Drug Enforcement Administration's own reporting reveals that suppression of "mom-and-pop" production has resulted in foreign manufacture that surpasses the domestic production it replaced. In the case of cocaine, restricted access to markets led drug cartels to build semisubmersible vessels that can carry illicit cargo worth hundreds of times the cost of the vessel itself. In both cases, the basic policy failure lay in the attempt to control tools and skills in the context of a market in which consumers are willing to pay prices that support use of those tools and skills. The potential negative consequences of regulating the synthetic biology toolkit are similar. Many questions must be addressed before implementing any such policy. For instance, what is the line dividing do-it-yourself biology from a start-up company operating in a garage? Should all individuals interested in learning about biotechnology be certified in some way? If so, that process will increase the costs of both education and innovation. What if those costs are so large that they discourage research and innovation, and thereby depress economic growth?
Alternatively, what if the certification costs are large enough, but the physical barriers to use low enough, that it is possible to avoid certification while engaging in backroom research and development? What if backroom R&D finds a demand for illicit products at prices that encourage avoiding certification -- the very definition of a black market? As the meth and cocaine examples demonstrate, many policies intended to increase safety and security turn out to be counterproductive in practice. Regulation of synthetic biology could result in a black market -- the worst possible outcome, and one that should be avoided as an unbearable cost. Everyone involved in this conversation wants to maximize safety and security. Regulation might be an appropriate mechanism toward this end, but it must be smart regulation. Proposals to regulate are every bit as deserving of "prudent vigilance" as the field of synthetic biology itself. Dr. Rob Carlson is a Principal at Biodesic, an engineering, consulting, and design firm in Seattle. At the broadest level, Carlson is interested in the future role of biology as a human technology. He has worked to develop new biological technologies in both academic and commercial environments, focusing on molecular measurement and microfluidic systems. Carlson is the author of the book Biology is Technology: The Promise, Peril, and New Business of Engineering Life, published in 2010 by Harvard University Press. Carlson earned a doctorate in Physics from Princeton University in 1997. Links to additional articles and his blog can be found here.
""" -- - Bryan http://heybryan.org/ 1 512 203 0507 From hkeithhenson at gmail.com Sun Jan 2 01:39:37 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 1 Jan 2011 18:39:37 -0700 Subject: [ExI] Spacecraft (was MM) Message-ID: On Sat, Jan 1, 2011 at 1:19 PM, Samantha Atkins wrote: > On Jan 1, 2011, at 2:40 AM, Keith Henson wrote: > >> On Fri, Dec 31, 2010 at 11:07 PM, ?Samantha Atkins wrote: snip >>> Using the standard 1 MW/kg gives 300 GW for a 300 ton vehicle, ?50 GW for a 50 ton vehicle. ?Lasers are generally 10% power efficient so 10x the output power is needed to drive them. ?What is the joke? >> >> 1MW/kg is what you need to boost against 1 g. ?The trick here is to >> get up high burning hydrogen and air with a substantial vertical and >> horizontal velocity before the laser takes over powering propulsion. >> Then you use a *long* acceleration to reach orbital velocity. ?See >> figure 4 here http://www.theoildrum.com/node/5485 for a typical >> trajectory. >> >> And laser diodes are now 50% efficient with an ongoing development >> project projected to reach 85%. ?This is monochromatic rather than >> coherent but the light can be converted to coherent at a loss of 10% >> or less. > > > I don't think you can quote laser diode efficiency when talking about these very high powered lasers without talking about the pumping methods, light and heat damage to components and so on. I was mainly making the point that your number of 10% efficient is way out of date. Light damage is mainly a problem with ablation propulsion, heat you just design to deal with. 6 GW would not be one laser, but more than a thousand 1-5 MW units. The largest continuous laser is a truck mounted military one rated at 105 kW. Internally it is a number of 15 kW units. >>> >>>> Based on >>>> Jordin Kare's work, this takes a flotilla of mirrors in GEO. 
Current >>>> space technology is good enough to keep the pointing error down to .7 >>>> meters at that distance while tracking the vehicle. The lasers don't >>>> need to be on the equator so they can be placed where there is grid >>>> power. They need to be 30-40 deg to the east of the launch point. >>>> >>> >>> Uh huh. What is the max distance you are speaking of? >> >> Around one sixth of the circumference, 40,000/6, 6,666 km. > > That amounts to about 0.002 MOA tracking a rocket through atmosphere. MOA? > If we can do that then we can shoot down any old missile, any time with perfect accuracy. The possibility of the laser beam going off target for some reason is why you want a long path to the east over water. But yes, this transport method does have some rather obvious military applications. A 6 GW laser beam delivers the energy of 1.5 tons of TNT per second. snip > The current record for a small test vehicle climbing an admittedly low-power beam is measured in the hundreds of feet. The ones that have gone up a few hundred feet are not related at all to this kind of setup. They only work in the atmosphere. This works best outside. > A power beam that strong would bring issues of whether it would propel or melt the nozzles. If the beam got a bit off center then it could be a real danger to the rocket itself, which presumably is not of a high-melting-point alloy such as the nozzles would be. The current thinking on the design has the laser beam going through a sapphire window filled with cold flowing 10-20 bar hydrogen. 6 GW sounds like a lot, but it is absorbed over close to 1000 square meters. So that's 6 MW per square meter. That's in the range of what happens inside the firebox of a coal-fired power plant. Thought about on a smaller scale, it's 600 W per square cm. It's not hard to imagine a 1 cm square hole dumping 600 watts of heat into a flowing stream of hydrogen and heating the gas to 3000 deg K. Regen cooling keeps the nozzle from getting too hot.
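The flux and pointing figures quoted above are easy to sanity-check with a few lines of Python (a back-of-envelope sketch, not from the original thread; the ~1000 square meter absorber area and the standard 4.184 GJ per ton-of-TNT convention are the assumptions in play):

```python
# Sanity check of the beam numbers quoted in this thread.
P = 6e9                # beam power, W (6 GW)
area_m2 = 1000.0       # absorber area, m^2 (the "close to 1000 square meters" above)

flux_w_m2 = P / area_m2            # W per square meter
flux_w_cm2 = flux_w_m2 / 1e4       # 1 m^2 = 10^4 cm^2

TNT_TON_J = 4.184e9                # energy of 1 ton of TNT, J (standard convention)
tnt_per_s = P / TNT_TON_J          # tons of TNT-equivalent delivered per second

# Pointing: 0.7 m error at the ~6,666 km slant range quoted above
theta_rad = 0.7 / 6.666e6
theta_arcsec = theta_rad * 206265.0   # radians to arcseconds

print(f"flux: {flux_w_m2:.0f} W/m^2 = {flux_w_cm2:.0f} W/cm^2")  # 600 W/cm^2
print(f"TNT-equivalent: {tnt_per_s:.2f} tons/s")                 # ~1.4 tons/s
print(f"pointing error: {theta_arcsec:.3f} arcsec")              # ~0.022 arcsec
```

The script reproduces the numbers in the post: 6 MW/m^2 is 600 W/cm^2, 6 GJ/s is about 1.4 tons of TNT per second, and 0.7 m at 6,666 km is roughly a fiftieth of an arcsecond of angular error.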
> The aiming is by no means trivial. I didn't mean to give the impression it was. However, the pointing accuracy of Hubble is less than a meter from GEO to the trajectory path. Tracking is slow, traversing about 8 deg in 900 sec. > Nor is the amount of power needed by the lasers. It's a huge consideration. At 50% overall, the grid draw would be 12 GW. On the other hand, Three Gorges is 22 GW. > How do the orbital mirrors station-keep while reflecting that intense a power beam? It's not particularly intense. The mirrors in GEO are 30 meters across. > What is the required station keeping and mirror adjustment speed? You can compensate for the light pressure by orbiting 4 km inside GEO. Tracking is as above, slow. > What kind of lasers do you have in mind for this application? This site, http://www.rp-photonics.com/high_power_lasers.html, doesn't lead me to think multi-GW lasers are particularly straightforward, especially not for such sustained high-precision power levels. I don't understand why you think high-precision power levels are required. > The most powerful ground based lasers I could find were anti-missile lasers that seemed to top out at 10 MW or so. These were not atmosphere compensated. How much power will you lose to atmosphere compensation? I understand thus far that atmospheric self-focusing only works in narrow power ranges defined by the type of laser used, atmospheric conditions and amount of atmosphere to be traversed. All of this doesn't lead me to believe this is so straightforward. I really don't like arguments from authority, but Dr. Jordin Kare http://en.wikipedia.org/wiki/Jordin_Kare knows far more about this than I do. However, the proposal does not use power levels where you get atmospheric distortions. Clouds at the laser end will be a problem.
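The tracking and grid-power figures in this exchange check out the same way (again a sketch, not from the thread itself; the 50% wall-plug-to-beam efficiency is the figure assumed in the post above):

```python
import math

# Slew rate: ~8 degrees of arc traversed over ~900 s of powered ascent
slew_deg_s = 8.0 / 900.0                # degrees per second
slew_rad_s = math.radians(slew_deg_s)   # radians per second

# Grid draw for a 6 GW beam at 50% overall wall-plug efficiency
beam_w = 6e9
grid_w = beam_w / 0.5                   # 12 GW, vs. Three Gorges at ~22 GW

print(f"slew rate: {slew_deg_s:.4f} deg/s ({slew_rad_s:.2e} rad/s)")
print(f"grid draw: {grid_w / 1e9:.0f} GW")
```

The slew rate works out to under a hundredth of a degree per second, which is why the tracking is described as slow, and the 12 GW grid draw follows directly from the 50% efficiency assumption.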
Keith From jebdm at jebdm.net Sun Jan 2 01:47:00 2011 From: jebdm at jebdm.net (Jebadiah Moore) Date: Sat, 1 Jan 2011 20:47:00 -0500 Subject: [ExI] sound archive In-Reply-To: <002501cba9dc$ff2e1320$fd8a3960$@att.net> References: <002501cba9dc$ff2e1320$fd8a3960$@att.net> Message-ID: Not to detract from your main point (which I think is an interesting idea), but I'm pretty sure those lighters didn't go away. I remember seeing them around my whole life, and I'm only 19; in fact, they were quite popular among a number of boys, who would flip them open and closed constantly, trying to perfect the act of flicking them open and lighting them in a single fluid motion. I've never heard of the brand Rossignol (which Google thinks is a winter gear manufacturer)--perhaps you mean Ronson, which produces Ronsonol lighter fluid (which is just naphtha)? Anyways, Ronson is now owned by Zippo, which is the most common producer of these types of lighters (and in fact the name Zippo has been genericized to mean all lighters of this type). They're not hard to find, either--I see them all the time at gas stations and drug stores. They're also frequently available at flea markets, carnivals, fairs, gift stores, and other such places. I'm guessing you can also get them at most head shops and tobacco stores, though I haven't been to either so I wouldn't know. More to the point, there are a number of sound archives out there, although I don't know of (and couldn't quickly find) any with the specific purpose you stated. (This one even has the sound you're thinking of, although it's not a free archive: http://www.audiosparx.com/sa/search/home_srchpost.cfm?target=zippo). Given how many such recordings there are out there, it probably wouldn't be too hard to put together a list of obsolete sounds; I suppose the difficult part would be trying to remember sounds that don't happen any more, or thinking of sounds that won't happen any more in the future.
-- Jebadiah Moore http://blog.jebdm.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sun Jan 2 04:34:38 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 1 Jan 2011 23:34:38 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D1F98C4.50109@satx.rr.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> Message-ID: <958EE705-A931-4AF3-B53F-F7E1ED1D8EBF@bellsouth.net> On Jan 1, 2011, at 4:12 PM, Damien Broderick wrote: >> A perfect copy is indistinguishable from the original. > > Except to the original, marched off to the dungeon. No, you are entirely incorrect. Even the original can't distinguish the copy from the original, nor can the copy, nor can any other part of the universe; but of course if you march one off to a dungeon and not the other then they are no longer identical. There is nothing spooky or any great mystery to the fact that due to different circumstances identical things can diverge. Identical adjectives can too. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Sun Jan 2 02:48:13 2011 From: agrimes at speakeasy.net (Alan Grimes) Date: Sat, 01 Jan 2011 21:48:13 -0500 Subject: [ExI] My transhumanist yahoogroups. Message-ID: <4D1FE76D.60505@speakeasy.net> Yahoogroups bought out all the other mailing list providers in the '90s and have since faded and are now moribund... Yet still, I'm a member of 37 of them. Here is a selected list of those groups that may be of some interest here. http://groups.yahoo.com/group/massconjoinment/ Founded: Jan 29, 2000 (by me), 707 members, and I do actively maintain the group. 
http://groups.yahoo.com/group/boobgirls/ Founded: May 8, 2000, again by me, 139 members. http://tech.groups.yahoo.com/group/thresearch/ Founded: May 27, 2000 (by me), only 26 members, but I continue to hope that something useful will come of it. =( http://groups.yahoo.com/group/breaststoobigtocarry/ founded: Aug 19, 2001, I am the primary active moderator, not much activity but 894 members. http://groups.yahoo.com/group/extremehugebustfetishes/ Founded: May 23, 2001, I am the primary active moderator, not much activity, 886 members. http://tech.groups.yahoo.com/group/a2i2Tech/ << dead http://groups.yahoo.com/group/AI-Arms-Race/ << run by a demented individual, some posters post some good alternative news there, sometimes someone interesting shows up. http://groups.yahoo.com/group/Android_Companions/ http://tech.groups.yahoo.com/group/arcondev/ << dead and trashed, still hold faint hope for its revival. http://tech.groups.yahoo.com/group/artificialintelligencegroup/ << a lot of Iranians and Pakistanis but it's a good active group with lots of members, not very focused though. http://tech.groups.yahoo.com/group/behavioranalysisandrobotics/ http://groups.yahoo.com/group/borg_collective/ << I wish this list were more active, there's only one other reliable person on that list. =\ http://tech.groups.yahoo.com/group/dcfuture/ << a group briefly organized by the great and powerful GOERTZEL. http://groups.yahoo.com/group/HumanRobotics/ http://tech.groups.yahoo.com/group/machine-learning/ http://tech.groups.yahoo.com/group/minimailist/ << An AI group. http://tech.groups.yahoo.com/group/neuroscience_and_the_mind/ http://groups.yahoo.com/group/SHAI-Now-Feasibility-Study/ << run by same crackpot, I think... http://groups.yahoo.com/group/technocalypse/ << the only group where I've been both very active and not banned. Good group but somewhat dormant. http://groups.yahoo.com/group/TransAct/ << an attempt to form a political wing of the transhumanist movement. 
http://groups.yahoo.com/group/transtopia/ http://tech.groups.yahoo.com/group/v-humans/ -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From kanzure at gmail.com Sun Jan 2 05:16:48 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 23:16:48 -0600 Subject: [ExI] Fwd: Personal genomics and DIYbio event in Singapore Message-ID: ---------- Forwarded message ---------- From: Denisa Kera Date: Sat, Jan 1, 2011 at 11:05 PM Subject: Event in Singapore To: diybio at googlegroups.com If anyone is around vacationing, come and meet us :-) There is a conference "Asian Biopoleis: Biotechnology & Biomedicine as Emergent Forms of Life and Practice" starting on January 5, programme http://www.ari.nus.edu.sg/showfile.asp?eventfileid=603 More info about the whole project http://www.ari.nus.edu.sg/events_categorydetails.asp?categoryid=6&eventid=1093 On January 8 there will be a panel on consumer genomics & DIYbio in the local Biopolis. I will mention some of the cooking experiments in our local Hackerspace and one project in Indonesia (HONF). Panel: 10:00 Sandra Soo-Jin LEE Center for Biomedical Ethics, Stanford University Race, Risk and Recreation in Personal Genomics 10:30 Takashi KIDO Riken Genesis. Co., Ltd. Genetics and Artificial Intelligence for Personal Genome Services 11:00 Denisa KERA Communications and New Media Programme, National University of Singapore Emerging Citizen Science Incubators and Projects in Asia -- You received this message because you are subscribed to the Google Groups "DIYbio" group. To post to this group, send email to diybio at googlegroups.com. To unsubscribe from this group, send email to diybio+unsubscribe at googlegroups.com . For more options, visit this group at http://groups.google.com/group/diybio?hl=en. -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kanzure at gmail.com Sun Jan 2 05:17:53 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 1 Jan 2011 23:17:53 -0600 Subject: [ExI] My transhumanist yahoogroups. In-Reply-To: <4D1FE76D.60505@speakeasy.net> References: <4D1FE76D.60505@speakeasy.net> Message-ID: On Sat, Jan 1, 2011 at 8:48 PM, Alan Grimes wrote: > and have since faded and are now moribund... Yet still, I'm a member of > 37 of them. Here is a selected list of those groups that may be of some > interest here. Here's an outdated list of my subscriptions: http://heybryan.org/mailing_lists.html Apparently Google only lets you subscribe to 200 "Google Groups" at a time. - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jan 2 05:55:54 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 01 Jan 2011 23:55:54 -0600 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <958EE705-A931-4AF3-B53F-F7E1ED1D8EBF@bellsouth.net> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <958EE705-A931-4AF3-B53F-F7E1ED1D8EBF@bellsouth.net> Message-ID: <4D20136A.8000806@satx.rr.com> On 1/1/2011 10:34 PM, John Clark wrote: > No, you are entirely incorrect. Even the original can't distinguish the > copy from the original, nor can the copy, nor can any other part of the > universe John, you keep skittering off into abstractions that have nothing to do with the cases that were stipulated. For the seventy-leventh time: the question is NOT "will the copy feel as if it's me?" Everyone agrees that a good enough copy must by definition feel, think, act like me (until our experiences-in-the-world differ sufficiently). 
The strongest form of the evaded question remains: if you, here and now, have to be destructively scanned in order to build a replica who will feel just like you, will you happily agree to dying in order that this copy will go on afterwards? This can be via a Star Trek transporter or through the ablative scanning of a vitrified cryonic brain. And the question to answer is: what is MY stake in being destroyed in order that HE will be created? I realize that your extremely reductive scientism will reply that this is a meaningless question because all hydrogen atoms are identical, blah blah, and I can only gaze at this specious response with astonishment and a bit of indignation. Incidentally, it seems to me likely that your libertarianism might have something to do with how you abstract away from any social context to your thought experiment. Your spherical cows seem to inhabit a world empty of any history, honesty, trust, reliable records, mutual observation, any of the practices by which real humans recognize each other diachronically and tell each other apart, even if they are identical twins whom other people have trouble distinguishing. Damien Broderick From stathisp at gmail.com Sun Jan 2 05:54:00 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 2 Jan 2011 16:54:00 +1100 Subject: [ExI] Book: "The Immortality Edge" In-Reply-To: References: Message-ID: On 31/12/2010, at 2:20 PM, John Grigg wrote: > Any opinions about this book? > > http://www.kurzweilai.net/the-immortality-edge-realize-the-secrets-of-your-telomeres-for-a-longer-healthier-life What evidence is there that diet or other lifestyle factors affect telomeres, and what evidence is there that any such effect would increase longevity? It sounds like pseudoscience. 
From spike66 at att.net Sun Jan 2 07:23:54 2011 From: spike66 at att.net (spike) Date: Sat, 1 Jan 2011 23:23:54 -0800 Subject: [ExI] sound archive In-Reply-To: References: <002501cba9dc$ff2e1320$fd8a3960$@att.net> Message-ID: <006501cbaa4e$034a67d0$09df3770$@att.net> . On Behalf Of Jebadiah Moore Subject: Re: [ExI] sound archive .Not to detract from your main point (which I think is an interesting idea), but I'm pretty sure those lighters didn't go away.I've never heard of the brand Rossignol (which Google thinks is a winter gear manufacturer)--perhaps you mean Ronson. Ronson it is, thanks. I was going on very old memories, and I did remember the fluid Ronsonol, confused it with Rossignol the ski manufacturer. .More to the point, there are a number of sound archives out there, although I don't know of any (and couldn't quickly find) any with the specific purpose you stated.-- Jebadiah Moore Do let me get to the specifics that caused me to think of the idea. When one fires up an Apple computer, there is a chord as played on a guitar, which always sounded familiar to me, but I could not place it. Not being a guitarist, but somewhat of a music hipster, I thought it a diminished 7th chord in F, or possibly an F minor 7th, but I could be mistaken. Yesterday I heard that chord on the radio, which was immediately followed by the comment: It's been a hard day's night And I've been working like a dog. And so forth, then I remembered where I had heard that chord, so very many years ago in my own misspent youth. Question: if I had the chord and could reproduce it on a guitar or could get reasonably close, how can we conceive a google-like device to get from the sound to text describing it? If I knew the song, I could easily google a YouTube of the chord, but if all I have is the chord, or the tune without any lyrics, I need a googly tool to get to the song. I could imagine reducing a rhythm to a googlable form, but not really a chord or a tune. 
Regarding our current efforts at scanning and OCRing old books to get these into ASCII text, this is all admirable indeed. What I am proposing is an audio version of preservation of the past. The sound of a Ronson lighter, the buzz of an icicle on the cooling fan of an antiquated refrigerator, these are the kinds of things that defy efforts at reduction to ASCII files. By still further extension, could we not attempt to somehow preserve extinct smells? While doing genealogical research, I found a family who still heated their home with coal. That gives the home a distinctive, not unpleasant smell. How many here have smelled a home heated by coal? Is there any way to create a googlish device that can get us from a smell to a descriptive text or website? These sensations will soon be lost forever to humanity, as was the wisdom in the library of Alexandria in the fire. If we can figure out how to google a smell or a sound, we may save these sensations. Otherwise the fires of Alexandria burn brightly and tragically. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jan 2 07:50:07 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 2 Jan 2011 18:50:07 +1100 Subject: [ExI] Japan nano-tech team creates palladium-like alloy In-Reply-To: References: Message-ID: On 31/12/2010, at 2:31 PM, John Grigg wrote: > An exciting development in terms of breaking off expensive > dependencies with foreign nations... > > http://www.physorg.com/news/2010-12-japan-nano-tech-team-palladium-like-alloy.html It depends on the proportion of rhodium used and the cost of the process. Rhodium is the most expensive of the platinum group metals, three times the price of palladium. From jonkc at bellsouth.net Sun Jan 2 08:37:05 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 2 Jan 2011 03:37:05 -0500 Subject: [ExI] simulation as an improvement over reality. 
In-Reply-To: <4D20136A.8000806@satx.rr.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <958EE705-A931-4AF3-B53F-F7E1ED1D8EBF@bellsouth.net> <4D20136A.8000806@satx.rr.com> Message-ID: <12663940-D876-45E7-8FC8-3DE5F451A3E6@bellsouth.net> On Jan 2, 2011, at 12:55 AM, Damien Broderick wrote: > you keep skittering off into abstractions I keep taking common superstitions about personal identity and use nothing but simple logic to expose the absurdity within; you haven't even tried to counter my arguments, instead you simply say it's all "blah blah" and continue to restate your tired old beliefs which come from your gut not your brain. If my views are just skittering abstractions it should be easy to expose them for what they are but instead you just come off sounding like Samuel Wilberforce in the great Evolution debate in the 19th century. > the question is NOT "will the copy feel as if it's me?" So apparently you feel that you will feel like you but you will not be you while you want to feel like you feel today and feel like you and you are you. Huh? > The strongest form of the evaded question remains: if you, here and now, have to be destructively scanned Destructively? I don't have to read any more, if I am to be destroyed then I most certainly would not agree to anything, but I want you to explain very clearly what is being done to me and how that action results in my destruction. > in order to build a replica who will feel just like you, will you happily agree to dying in order that this copy will go on afterwards? 
Dying means having a last thought, if your action resulted in that then I would never agree to it, but you're not talking about that; in fact agreement is not even an issue because what you're talking about is so inconsequential I wouldn't even know it had happened unless you told me and even then I probably wouldn't believe you. By the way, I'm not even a big fan of definitions yet I have an exact definition of death. Do you? > > what is MY stake in being destroyed in order that HE will be created? I cannot answer the above query because in this context the symbol "MY" and the symbol "HE" mean the exact same thing rendering the question gibberish. > I realize that your extremely reductive scientism will reply that this is a meaningless question Give that man a cigar! But if you have an alternative to "scientism" that you think is better at discovering the true nature of things I would very much like to hear about it. > because all hydrogen atoms are identical, blah blah, and I can only gaze at this specious response with astonishment and a bit of indignation. So we are reduced to this lowbrow state, scientific arguments deserve our anger because by their very nature they are false and we should all just think with our gut. I disagree, I see no reason why Evolution would endow us with intuition that was correct in a matter like this because up to now it would have given an animal no survival advantage. But times change and biological Evolution is far too slow to keep up, so if we expect to survive we must use our brain not our intestinal tract. > it seems to me likely that your libertarianism might have something to do with how you abstract away from any social context to your thought experiment. The existing social, economic, or geopolitical structure of our culture has zero relevance to the issues we are talking about. 
> Your spherical cows seem to inhabit a world empty of any history, honesty, trust, reliable records, mutual observation, any of the practices by which real humans recognize each other diachronically and tell each other apart, even if they are identical twins whom other people have trouble distinguishing. So in the past people have viewed the nature of identity in a certain way thus we must continue viewing things that same way even though the world is radically changing. I disagree. And stop with the silly identical twins business, my sisters are identical twins and I know full well they have different personalities, and even as a kid I could always tell them apart even if few others could. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jebdm at jebdm.net Sun Jan 2 09:20:33 2011 From: jebdm at jebdm.net (Jebadiah Moore) Date: Sun, 2 Jan 2011 04:20:33 -0500 Subject: [ExI] sound archive In-Reply-To: <006501cbaa4e$034a67d0$09df3770$@att.net> References: <002501cba9dc$ff2e1320$fd8a3960$@att.net> <006501cbaa4e$034a67d0$09df3770$@att.net> Message-ID: 2011/1/2 spike > Question: if I had the chord and could reproduce it on a guitar or could > get reasonably close, how can we conceive a google-like device to get from > the sound to text describing it? If I knew the song, I could easily google > a YouTube of the chord, but if all I have is the chord, or the tune without > any lyrics, I need a googly tool to get to the song. I could imagine > reducing a rhythm to a googlable form, but not really a chord or a tune. > Well, there are a few music search engines which take audio and try to identify a song from that audio (for instance, http://www.soundhound.com/). The concept ought to be generalizable, but more difficult when you get less domain-specific (since I imagine SoundHound and co do some significant dimension reduction). See http://labs.ideeinc.com/ for some general work in this area with images. 
Smells would require an IO device for smells, obviously, but would probably be even easier to search (since input on smells would probably require recording what chemicals are in the air and their ratios, and searching a database using that information would be pretty simple). -- Jebadiah Moore http://blog.jebdm.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Sun Jan 2 09:50:00 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 2 Jan 2011 10:50:00 +0100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D1F98C4.50109@satx.rr.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> Message-ID: <20110102095000.GS16518@leitl.org> On Sat, Jan 01, 2011 at 03:12:36PM -0600, Damien Broderick wrote: > On 1/1/2011 10:44 AM, Eugen Leitl wrote: > >>> Why drag soul into this? A perfect copy is not the original. >>> > That is what this unending discussion seems to sum up to. OK. Fine. Next. >> >> A perfect copy is indistinguishable from the original. > > Except to the original, marched off to the dungeon. AARGH. Once and for all, in two synchronized instances either two of them are marched off (in lockstep, sharing a single point of eyes), or none. Why is it so difficult to understand synchronized systems? EVERYTHING in two synchronized systems must be exactly the same. No exceptions. Zero. None. Did everyone copy that? Perfect copies are indistinguishable. If they were distinguishable, they would not be perfect copies. >> Location is not >> a label encoded within the copy, or else it would be distinguishable. >> See, it's easy. > It's always easy to answer the wrong question. It is exactly the right question. Trust me, I'm clairvoyant. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From avantguardian2020 at yahoo.com Sun Jan 2 09:38:17 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 2 Jan 2011 01:38:17 -0800 (PST) Subject: [ExI] simulation as an improvement over reality. Message-ID: <205106.56620.qm@web65601.mail.ac4.yahoo.com> >From: John Clark >To: ExI chat list >Sent: Fri, December 31, 2010 9:53:05 PM >Subject: Re: [ExI] simulation as an improvement over reality. > > >On Dec 31, 2010, at 5:28 PM, The Avantguardian wrote: > > >The soul? Is that what you think this is about? -----------John wrote--------------------------- Yes, that is exactly what I think this is about. You say the copy is perfect but it is nevertheless missing something; leaving aside the obvious illogic of such a thing, what exactly is this secret sauce that the original has that the copy does not? You say it is not information, and you'd better say it is not atoms or you will end up inundated in absurdities, so this mysterious ingredient must be something else entirely and it is of enormous importance too, but for some unknown reason it cannot be explained or even detected by the scientific method. There is already a word in the English language for something like that, but I can't really blame you, I'd feel pretty foolish using The Word That Must Not Be Named too. --------------------------------------------------- I am not saying that there is something "missing" from the copy. 
I am saying that both the original and the copies will have unique reference frames. These reference frames will be physical in the sense that they will sweep out distinct world lines in space-time, and mental in the sense that their brains will perceive/construct a map or model of the world with their particular instance of self at the origin of a comoving reference frame. Call it the autocentric sense, if you will, since it is the perception that one's consciousness lies at the center of the universe. --------John Wrote------------------------------------------- Congratulations, you have discovered that (some) things happen at a particular place at some time; but of course adjectives like you and me do not. -------------------------------------------- I know you are fond of thinking that the pronouns you and me are adjectives. But I think they are more like prepositions. The label "you" implies "over there". Me implies "here". ----------------John wrote-------------------------------- Talking about space-time coordinates does sound much more scientific than mundane time and place, even if it means the same thing and brings nothing new to the conversation. And if that is the secret of identity it leads to some peculiar conclusions, you become a completely unrelated person from one second to the next, or when you move from one place to another, a totally different person whose continued consciousness is of absolutely no interest to you, other than that of empathy. And yet despite it all somehow I seem to continue, how odd. ------------------------------------------------------- Yes you continue there where you are. Like Buckaroo Banzai once said, "wherever you go, that's where you are". You don't feel like a different person by moving from one spatial coordinate to another because the reference frame moves with you i.e. 
it is co-moving. That's not to say that the autocentric sense can't be fooled, as the links posted by Amara describe ways that it can be experimentally manipulated. But it is still an important aspect of consciousness IMO. > If my copy does not occupy my position in space and time, it is not me. -----------John wrote------------------------------- Even if that were true, and I have no reason to think it is, how do you even know what position you or your copy are in? If you exchange the position of you and an identical copy of you in a symmetrical room neither you nor the copy will notice the slightest difference, an outside observer will notice no difference either. The very universe itself will not notice that any exchange has occurred. Objectively it makes no difference and subjectively it makes no difference. If the difference is not objective and the difference is not subjective then that rather narrows down your options in pointing out just where that difference is. -------------------------------------------------------------- The autocentric sense does not track your absolute position in space, there is no such thing, but your position relative to external objects including any copies of you that may be around. And regardless of your autocentric sense, you have a physical position and associated reference frame relative to the fixed stars. Playing some kind of gedanken shell game with your copies does not change the fact that if one of your copies slapped you in the face, you would not wonder how you came to inadvertently slap yourself; instead you would correctly perceive that someone else has slapped you. Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D. Eisenhower From eugen at leitl.org Sun Jan 2 10:39:32 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 2 Jan 2011 11:39:32 +0100 Subject: [ExI] simulation as an improvement over reality. 
In-Reply-To: <205106.56620.qm@web65601.mail.ac4.yahoo.com> References: <205106.56620.qm@web65601.mail.ac4.yahoo.com> Message-ID: <20110102103932.GW16518@leitl.org> On Sun, Jan 02, 2011 at 01:38:17AM -0800, The Avantguardian wrote: > I am not saying that there is something "missing" from the copy. I am saying > that both the original and the copies will have unique reference frames. These No, because then they would cease to be original and the copy. > reference frames will be physical in the sense that they will sweep out distinct > > world lines in space-time, and mental in the sense that their brains will Space is not labeled. > perceive/construct a map or model of the world with their particular instance of > > self at the origin of a comoving reference frame. Call it the autocentric sense, Then they bifurcate, and become two people. Because, you get it, then they would be distinguishable. > if you will, since it is the perception that one's consciousness lies at the > > center of the universe. Where is the center of the universe? You measure your position using instruments relative to other objects. In case of synchronized copies such measurements MUST ALWAYS PRODUCE THE SAME RESULT. As long as you people are stuck in muddled thinking you will not make progress. This is easy; why do so many people have such trouble getting it? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Sun Jan 2 13:08:34 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 3 Jan 2011 00:08:34 +1100 Subject: [ExI] simulation as an improvement over reality. 
In-Reply-To: <4D1F98C4.50109@satx.rr.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> Message-ID: On Sun, Jan 2, 2011 at 8:12 AM, Damien Broderick wrote: >> A perfect copy is indistinguishable from the original. > > Except to the original, marched off to the dungeon. It's not the original that marches off to the dungeon. The process of marching causes the atoms at {x,y,z,t} to vanish utterly from the universe and (somewhat) similar atoms to appear at new coordinates {x',y',z',t'}. -- Stathis Papaioannou From stefano.vaj at gmail.com Sun Jan 2 15:28:25 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 2 Jan 2011 16:28:25 +0100 Subject: [ExI] Meat v. Machine (was simulation) In-Reply-To: <586924.64702.qm@web65615.mail.ac4.yahoo.com> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> Message-ID: On 29 December 2010 05:50, The Avantguardian wrote: > Imagine you are on a desert island with Dr. Jekyll and dozens of innocent > bystanders. Dr. Jekyll offers to share an elixir with you that he strongly > believes will transform anyone who quaffs it into a hirsute, immensely > strong, > and violently homicidal brute. He tells you that he will almost certainly > drink > it but, "if you don't like it, you don't have to do it." What would you do? > I would expect my genes to whisper "drink it, drink it" ;-) But, hey, for natural selection to work there must always be somebody who refuses to listen... ;-) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jan 2 16:56:47 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 02 Jan 2011 10:56:47 -0600 Subject: [ExI] simulation as an improvement over reality. 
In-Reply-To: <20110102095000.GS16518@leitl.org> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> Message-ID: <4D20AE4F.1090501@satx.rr.com> On 1/2/2011 3:50 AM, Eugen Leitl wrote: >>> >> A perfect copy is indistinguishable from the original. >> > >> > Except to the original, marched off to the dungeon. > > AARGH. Once and for all, in two synchronized instances > either two of them are marched off (in lockstep, sharing > a single point of eyes), or none. > > Why is it so difficult to understand synchronized systems? But this is where any such discussion here keeps going off the rails. Who cares about "perfectly synchronized systems"? The only relevance of "copying" I can imagine is recovery from cryonic arrest (where you have to hope nobody is stupid enough to perfectly copy your vitrified brain and leave it at that) or uploading to a computer substrate VR or robot body (where by definition the copy does not share the same viewpoint with the biological original). Fanciful thought experiments that have you stepping into a black box and two of you emerging instantly in lockstep are uninteresting and irrelevant to the real world, even if Lem or John Varley or Greg Egan or Franz Kafka could write an amusing story on such a premise. Damien Broderick From webmaster at happyipads.com Sun Jan 2 04:19:41 2011 From: webmaster at happyipads.com (webmaster at happyipads.com) Date: 1 Jan 2011 20:19:41 -0800 Subject: [ExI] invites you. Message-ID: <20110102041941.32171.qmail@vps20117.managemyvps.com> Dear ExI chat list, You have been invited by your friend to participate in a research program. 
Currently there are companies that are looking for individuals who are interested in reviewing and testing the new Apple iPad applications and games. After the review the participants may keep the iPad. For more details or to register to our program, follow the link below: http://www.happyipads.com Best Regards, and BTest Team. ___ This message was intended for extropy-chat at lists.extropy.org, and was sent on behalf of From eugen at leitl.org Sun Jan 2 17:59:07 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 2 Jan 2011 18:59:07 +0100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D20AE4F.1090501@satx.rr.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> Message-ID: <20110102175907.GD16518@leitl.org> On Sun, Jan 02, 2011 at 10:56:47AM -0600, Damien Broderick wrote: > But this is where any such discussion here keeps going off the rails. > Who cares about "perfectly synchronized systems"? The only relevance of How else are we supposed to survive a direct nuke hit? By being a distributed system, with multiple mirror nodes in the cloud, of course. Now, that was easy, wasn't it? As an example, consider a firewall cluster with stateful failover (I'm configuring such a thing at the moment). Two machines do operations on network traffic streams. The mapping is stateful, so the failover box must receive all state updates in realtime. If one of them drops dead (is shot with a shotgun from a close distance by a disgruntled server monkey), the second system takes over seamlessly. Nobody outside notices a damn thing, that's the whole point of stateful failover. This is only possible because the systems operate in lockstep. 
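The stateful-failover scheme described above can be sketched in a few lines. The class and method names below are invented for illustration (real firewall clusters use things like conntrackd or a vendor's HA protocol); the point is only the lockstep discipline: the active node replicates every connection-table update to the standby before applying it, so at failover the standby's state is identical.

```python
import json

class ConnTracker:
    """One node's connection-state table with lockstep replication (sketch)."""

    def __init__(self):
        self.table = {}    # (src, dst, dport) -> TCP state
        self.peer = None   # standby node receiving our state updates

    def attach_standby(self, peer):
        self.peer = peer

    def update(self, conn, state):
        # Replicate to the standby *before* applying locally, so the
        # standby is never behind the active node.
        if self.peer is not None:
            self.peer.receive(json.dumps({"conn": list(conn), "state": state}))
        self.table[conn] = state

    def receive(self, msg):
        d = json.loads(msg)
        self.table[tuple(d["conn"])] = d["state"]

active, standby = ConnTracker(), ConnTracker()
active.attach_standby(standby)
active.update(("10.0.0.5", "93.184.216.34", 443), "ESTABLISHED")

# Failover: the active box "drops dead"; the standby already holds an
# identical table, so nobody outside notices a thing.
assert standby.table == active.table
print(standby.table[("10.0.0.5", "93.184.216.34", 443)])  # → ESTABLISHED
```

Serializing the update through a message (here JSON) rather than sharing memory is what makes the same sketch work across two physical boxes.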
The firewall is dead, long live the firewall. > "copying" I can imagine is recovery from cryonic arrest (where you have > to hope nobody is stupid enough to perfectly copy your vitrified brain > and leave it at that) or uploading to a computer substrate VR or robot Yeah, resurrection in the flesh is pretty old-fashioned. And way too expensive, for the same reason why manned spaceflight is on the way out. > body (where by definition the copy does not share the same viewpoint > with the biological original). Fanciful thought experiments that have That's the problem with people who say they're the same person we talked to yesterday. They're complete impostors. How can they *prove* they're the same? In fact, it is easy to prove they're impostors. The turnover rate for parts of the CNS is less than 24 hours. They're not made from the same molecules! Zombies, all of them. Everyone out there is a zombie, except for me, of course. The Capgras patients were not delusional, after all! I see dead people! > you stepping into a black box and two of you emerging instantly in > lockstep are uninteresting and irrelevant to the real world, even if Lem > or John Varley or Greg Egan or Franz Kafka could write an amusing story > on such a premise. Au contraire, they demonstrate that you can be in multiple places at the same time (though you won't be able to enjoy the scenery). It destroys the idea that your location is a label, and the world lines matter. It prepares the ground for understanding why whole body/brain emulation from a destructively cryopreserved animal is not fundamentally different from a mere deep hypothermia lacune. True believers in continuity magic better eschew general anaesthesia. Or sleep. That should take care of the problem. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Sun Jan 2 17:52:39 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 2 Jan 2011 12:52:39 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D20AE4F.1090501@satx.rr.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> Message-ID: <06B00CD6-E899-48D6-87A4-9896376EFC1F@bellsouth.net> On Jan 2, 2011, at 11:56 AM, Damien Broderick wrote: > Fanciful thought experiments that have you stepping into a black box and two of you emerging instantly in lockstep are uninteresting and irrelevant to the real world Why? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Jan 2 18:20:23 2011 From: pharos at gmail.com (BillK) Date: Sun, 2 Jan 2011 18:20:23 +0000 Subject: [ExI] invites you. In-Reply-To: <20110102041941.32171.qmail@vps20117.managemyvps.com> References: <20110102041941.32171.qmail@vps20117.managemyvps.com> Message-ID: On Sun, Jan 2, 2011 at 4:19 AM, happyipads wrote: > > Dear ExI chat list, > > You have been invited by your friend ?to participate in a research program. Currently there are > companies that are looking for individuals who are interested in reviewing and testing the > new Apple iPad applications and games. > After the review the participants may keep the iPad. 
> > For more details or to register to our program, follow the link below: > Just in case anyone was wondering, this is a variant of the free ipads scam that has been circulating for about a year. This appears to be a spam-only variant. (Some ask for money up front or identity theft information). This email came from ilsa bartlett's gmail account which has been taken over by the spammers. They have sent the same email to everyone in her address book. And soon all their fellow spammers will be sending out spam as well from this account. Ilsa, you should try to recover your gmail account as soon as possible. If you are lucky, changing the password might be enough, but some people have found that this doesn't stop them. (And, by the way, if you intentionally signed up for this, you won't get an ipad, only lots of spam). BillK From jonkc at bellsouth.net Sun Jan 2 18:51:14 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 2 Jan 2011 13:51:14 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <205106.56620.qm@web65601.mail.ac4.yahoo.com> References: <205106.56620.qm@web65601.mail.ac4.yahoo.com> Message-ID: <97200CE3-F162-4519-89A0-60560F15ACE9@bellsouth.net> On Jan 2, 2011, at 4:38 AM, The Avantguardian wrote: > > I am not saying that there is something "missing" from the copy. I am saying > that both the original and the copies will have unique reference frames. In my thought experiment the two were not moving with respect to each other so I see absolutely nothing unique about their reference frames, and even if they were I'll be damned if I can see why it would matter. And anyway I thought you said the copies were perfect. > These reference frames will be physical in the sense that they will sweep out distinct > world lines in space-time Space-time lines of what? Space-time lines of every atom that was once part of your body, including that atom you pissed down the toilet when you were in the third grade? 
> Call it the autocentric sense, if you will Yet another euphemism for the soul. And please explain why this "autocentric sense" cannot be copied in a perfect copy. > The label "you" implies "over there". Me implies "here". But as I have said before and will continue saying, if the two are identical and you exchange "here" for "over there", even the very universe itself will not notice any difference, and remember that both you standing here and that fellow over there are also part of the universe, and you'd be no better at detecting that exchange than any other part of the universe. And as I have also said before, this is not just some skittering abstraction but the bedrock behind one of the most important ideas in modern physics, exchange forces. > You don't feel like a different person by moving from one spatial coordinate to another because the reference frame moves with you So if I give you general anesthesia, put you on a jet to an undisclosed location and then wake you up, Stuart LaForge will be dead and there will just be an impostor who looks, behaves, thinks and believes with every fibre of his being that he is Stuart LaForge. > > The autocentric sense does not track your absolute position in space, there is > no such thing, but your position relative to external objects including any > copies of you that may be around. But just what is "your position"? If you are reaching down into a deep dark hole trying to manipulate something by feel, your position is the tips of your fingers; if you're using a remote control device in Ohio to operate a robot defusing a bomb on the Great Wall of China, then your position is in the Orient, and your position is almost never inside a container made of bone. > And regardless of your autocentric sense, you have a physical position and associated reference frame relative to the fixed stars. 
Without your senses there is no way to even know where your brain is, so I sure don't see how it could have anything to do with consciousness or identity. For most of human history people thought the brain was an unimportant organ that had something to do with cooling the blood and the heart was the seat of consciousness; even though those ancient people literally didn't know where they were I still think they were conscious. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Sun Jan 2 19:27:33 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 2 Jan 2011 13:27:33 -0600 Subject: [ExI] "Transcending the Human, DIY Style" Message-ID: DIY transhumanism is super important to me, so hopefully I'd like to help put the Wired article (with the pseudonymed person @lepht_anonym) into perspective and share with the DIYbio community. DIY transhumanists are not in it for the shock value. Maybe we'll get Todd or Quinn to send a short blurb about their neodymium implant surgeries-- nobody in DIY transhumanism (or DIYbio) is advocating unnecessary pain or body shock. We're in it for human enhancement, synthetic biologies, longevity, nootropics, software, prosthetics, tech development, yes even implants. Transhumanism itself refers to a whole philosophy of self-transformation and human enhancement. Genspace wasn't established in a month and BSL1 rating didn't fall from the sky. Biocurious, under the direction of Joseph Jackson (who is by no coincidence also a transhumanist), is on its way for sure. OK, so neodymium implants, ooh body shock. Not a huge deal, I agree-- didn't appreciate the pairing of "neodymium and pain"-- sane and reasonable people have done that before, again not a huge deal... it's like a less bioartsy Stelarc or something. The other thing that caught my eye was when @lepht_anonym did a battery-powered Northpaw implant, which was not featured in the Wired article. 
As it would inevitably happen, the batteries died, and @lepht_anonym had to cut virself open again. A little foresight, planning, device design would have prevented this. It wasn't a shining example of an implant project, IMHO. Other issues have caught my eye a few times that lead me to believe that @lepht_anonym is all around a liability to our communities: http://sapiensanonym.blogspot.com/2010/08/das-update.html ... "i have been known to slice my arms open for shits'n'giggles, sure, and do a fair amount of damage in the process (none of this emo cat-scratch bullshit, i've split my arm to the tendons like the little psychopath i sort of am), but this is not something i do when properly medicated. i need a better way of communicating that." Here's more 'body shock' or 'internet shock': Screaming-in-pain video "homebrew neodymium node insertion by Lepht Anonym" http://www.youtube.com/watch?v=yDIp_VzmRtg .. which she has taken down according to YouTube. Consensus says... liability. I suspect Biocurious will prove to be a strong player in DIY transhumanism in the near future. Along with Humanity+ there is growing support for transhumanists but at this point in time I think DIY transhumanist project participants need to take their work more seriously than @lepht_anonym has demonstrated for her own. Anyone interested in my overview talk of DIY transhumanism can watch my talk from H+ Summit 2010 @ Harvard: part 1: http://www.youtube.com/watch?v=i4ex52LYDe8 part 2: http://www.youtube.com/watch?v=hzUVd0skbc8 Humanity+ takes DIY transhumanism seriously and wants it to succeed in the best possible ways. You can become a supporting member by joining , which includes certain privileges like voting for DIY transhumanists on board elections. 
Another good way to get involved and put right in the middle of the action is the mailing list, which you can subscribe to over here: http://www.transhumanism.org/mailman/listinfo/wta-talk (There's also archives going back to 2003; the diybiogroup is better for getting in on the ground floor though :-).) """ This list is for all members of the Humanity+ (H+) to discuss topics relevant to transhumanism, and the activities of H+. Transhumanism is an interdisciplinary approach to understanding and evaluating the possibilities for overcoming biological limitations through technological progress. Transhumanists seek to expand technological opportunities to live longer and healthier lives and to enhance their intellectual, physical, and emotional capacities. Humanity+ (formerly the World Transhumanist Association) is a nonprofit membership organization which works to promote discussion and development of the possibilities for radical improvement of human capacities using genetic, cybernetic and nano technologies. H+ is now growing faster than ever, and we invite you to join us in this important work. In addition to wta-talk, you may also enroll in one of our discussion lists and join one of our local H+ chapters, which can be found in countries and languages all over the world. """ Humanity+ has published a few of @lepht_anonym's articles in H+ Magazine, if anyone wants to read that. http://hplusmagazine.com/articles/enhanced/scrapheap-transhumanism .. or the magazine in general: http://hplusmagazine.com/ lepht anonym blog http://sapiensanonym.blogspot.com/ things i do for biohacking, part 1 http://sapiensanonym.blogspot.com/2010/08/things-i-do-for-biohacking-part-1.html things i do for biohacking, part 2 http://sapiensanonym.blogspot.com/2010/08/things-i-do-for-biohacking-part-2.html I think it would be helpful if Todd Huffman or Quinn Norton would pipe up with their experiences and how their implant procedures differed. 
- Bryan http://heybryan.org/ 1 512 203 0507 Assoc. Director of R&D, Humanity+ http://www.humanityplus.org/ ---------- Forwarded message ---------- From: Natasha Vita-More Date: Sun, Jan 2, 2011 at 11:05 AM Subject: RE: [ExtroBritannia] Transcending the Human, DIY Style To: extrobritannia at yahoogroups.com Cc: Bryan Bishop All of me is unhappy that there is someone pushing bad use of DIY for his/her/its own recognition under the name of transhumanism. This person is not using DIY bio effectively or smartly. She may possibly be a "cutter", lacking in any medical knowledge of batteries and of the fact that the battery will have to be removed, and she can easily have issues with rejection. (Stelarc has a major problem with his implant, and that implant was his own tissue!). I have had implants as a biological artist (bioartist doing biology) and believe me, they are not fun or easy and are very painful; my own body rejected the implant and I had months of pain and multiple antibiotics to deal with my immune system and allergic reaction to the medicine. Anyone working in DIY bio needs to have a little medical background. Anyone working in DIY transhumanist bio needs to know what transhumanism means, and that DIY bio, if transhumanist in scope, is based on more than just putting things in our bodies. Natasha Natasha Vita-More -----Original Message----- From: extrobritannia at yahoogroups.com [mailto:extrobritannia at yahoogroups.com] On Behalf Of estropico Sent: Saturday, January 01, 2011 3:21 PM To: extrobritannia Subject: [ExtroBritannia] Transcending the Human, DIY Style Transcending the Human, DIY Style http://www.wired.com/threatlevel/2010/12/transcending-the-human-diy-style/ Well... part of me is happy that there's somebody out there pushing the envelope. Then again I can't help wondering whether this isn't just another form of fetishism that just happens to overlap with our desire to be more than human. 
More worryingly, this type of biohacking seems to have an obvious downside that makes me simply shudder, as a life-extensionist: "The medical consequences can be both severe and likely to elicit hostility from doctors. She's put herself in the hospital several times. She nearly lost a fingertip the first time she tried to implant a neodymium disc herself. Various experiments with bioproofing have failed, with implants rusting under her skin, or her own self-surgeries turning septic." All this to know where North is or feel magnetic fields (see article)!? Thanks but no thanks.... Cheers, Fabio -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Sun Jan 2 20:56:53 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 2 Jan 2011 14:56:53 -0600 Subject: [ExI] "Transcending the Human, DIY Style" In-Reply-To: References: Message-ID: On Sun, Jan 2, 2011 at 1:27 PM, Bryan Bishop wrote: > DIY transhumanism is super important to me, so hopefully I'd like to help > put the Wired article (with the pseudonymed person @lepht_anonym) into > perspective and share with the DIYbio community. ---------- Forwarded message ---------- Date: Sun, 2 Jan 2011 12:47:24 From: Eric Boyd To: "cyborg at lists.noisebridge.net" , "Bryan Bishop" Subject: [Cyborg] Fwd: [Body Hacking] "Transcending the Human, DIY Style" Response (see forwarded conversation below) to Lepht's Wired article on the body hacking list. This was also cross-posted to like half of the transhumanist world. Thought you guys might want to see it. Despite comments below, my understanding is that Lepht has never actually implanted the North Paw (or anything approaching it). She's talked about it, and I think she did once implant a motor, but she discovered (unsurprisingly) that transdermal implants are very difficult to take care of. 
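[Ed.: the core of the North Paw design Eric built and discusses here is simple enough to sketch: a ring of eight vibrators/electrodes around the ankle, with the one pointing closest to magnetic north driven, optionally at a PWM duty cycle that fades as it rotates away from north. The following is an illustrative model only; the function names, electrode spacing, and fade-out scaling are the editor's assumptions, not the actual North Paw firmware.]

```python
# Sketch of the North Paw selection logic: pick which of 8 evenly spaced
# electrodes points closest to magnetic north, and derive a PWM duty cycle
# that is strongest when the electrode points exactly north.

N_ELECTRODES = 8
SPACING = 360 // N_ELECTRODES  # 45 degrees between neighbors

def angular_error(heading_deg, idx):
    """Degrees between electrode idx and true north (0..180)."""
    a = (heading_deg + idx * SPACING) % 360
    return min(a, 360 - a)

def north_electrode(heading_deg):
    """Index (0..7) of the electrode pointing closest to north.

    heading_deg: compass heading of electrode 0, in degrees.
    """
    return min(range(N_ELECTRODES), key=lambda i: angular_error(heading_deg, i))

def pwm_duty(heading_deg, idx):
    """Duty cycle in [0, 1]; fades to zero at the 22.5-degree boundary."""
    return max(0.0, 1.0 - angular_error(heading_deg, idx) / (SPACING / 2))

# Device rotated 10 degrees east of north: electrode 0 is still nearest.
idx = north_electrode(10)
```

The design point worth noting is the one Eric makes about risk: the selection logic is trivial; the hard, genuinely uncertain part is what continuous tactile feedback does to brain plasticity, which no amount of firmware cleanliness answers in advance.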
She has plans to make a super-small version using neuroelectrodes and induction power transfer, but she lacks the electrical engineering skills to push that project forward on her own. So either she gets help, or it's going to be a slow project as she learns those skills... If I were Lepht, I'd focus first on making a wearable version of the electrode-based North Paw. Once she's got it wearable and the code all nice and cleaned up, and wireless reprogramming working, only then is it even remotely thinkable to implant it. I'd probably also prototype a little first: maybe a single implanted neuroelectrode/induction pad, using a PWM signal for North. If that holds up for a bit, then go for the 8-electrode version... frankly I'm glad I'm not Lepht, because pain and blood scare the shit out of me. I am of course a strong advocate of "DIY transhumanism" myself. My personal angle of approach is wearable electronics. I think it's a very approachable path to transhumanism, and totally reachable from a hobbyist level right now. I think there is enough risk in the wearables stuff: our understanding of brain plasticity is pretty weak. One of the reasons that I did the North Paw project is that I wanted to know if e.g. the withdrawal symptoms from wearing such a device would be significant. They are not, but I didn't know that before I began, and that's what risk means... Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Sun Jan 2 21:16:10 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 2 Jan 2011 15:16:10 -0600 Subject: [ExI] [wta-talk] Archives? In-Reply-To: <905582.91637.qm@web32201.mail.mud.yahoo.com> References: <905582.91637.qm@web32201.mail.mud.yahoo.com> Message-ID: On Sun, Jan 2, 2011 at 3:06 PM, juan meridalva wrote: > Can somebody tell me if we have WTA-TALK Archives. How to access them? Hey Juan, Sorry about that. I've temporarily screwed that up. 
In the mean time, here are the links: http://www.transhumanism.org/mailman/private/wta-talk/2003-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-January/thread.html 
http://www.transhumanism.org/mailman/private/wta-talk/2005-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-Feburary/thread.html 
http://www.transhumanism.org/mailman/private/wta-talk/2007-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-March/thread.html 
http://www.transhumanism.org/mailman/private/wta-talk/2009-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2010-December/thread.html ordered by months: http://www.transhumanism.org/mailman/private/wta-talk/2003-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-January/thread.html 
http://www.transhumanism.org/mailman/private/wta-talk/2007-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-January/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-Feburary/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-March/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-April/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-May/thread.html 
http://www.transhumanism.org/mailman/private/wta-talk/2004-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-May/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-June/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-July/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-August/thread.html 
http://www.transhumanism.org/mailman/private/wta-talk/2009-August/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-September/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-October/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2006-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-November/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2003-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2004-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2005-December/thread.html 
http://www.transhumanism.org/mailman/private/wta-talk/2006-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2007-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2008-December/thread.html http://www.transhumanism.org/mailman/private/wta-talk/2009-December/thread.html For the sister-list, extropy-chat, there's these: http://lists.extropy.org/pipermail/extropy-chat/2003-October/thread.html http://lists.extropy.org/pipermail/extropy-chat/2003-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2003-December/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-January/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-Feburary/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-March/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-April/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-May/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-June/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-July/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-August/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-September/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-October/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2004-December/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-January/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-Feburary/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-March/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-April/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-May/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-June/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-July/thread.html 
http://lists.extropy.org/pipermail/extropy-chat/2005-August/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-September/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-October/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2005-December/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-January/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-Feburary/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-March/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-April/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-May/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-June/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-July/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-August/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-September/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-October/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2006-December/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-January/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-Feburary/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-March/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-April/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-May/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-June/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-July/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-August/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-September/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-October/thread.html 
http://lists.extropy.org/pipermail/extropy-chat/2007-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2007-December/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-January/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-Feburary/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-March/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-April/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-May/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-June/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-July/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-August/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-September/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-October/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2008-December/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-January/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-Feburary/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-March/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-April/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-May/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-June/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-July/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-August/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-September/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-October/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2009-December/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-January/thread.html 
http://lists.extropy.org/pipermail/extropy-chat/2010-Feburary/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-March/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-April/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-May/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-June/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-July/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-August/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-September/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-October/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-November/thread.html http://lists.extropy.org/pipermail/extropy-chat/2010-December/thread.html Some day we should do a full/proper mailbox-based backup. - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sun Jan 2 21:39:49 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 2 Jan 2011 15:39:49 -0600 Subject: [ExI] FW: [ExtroBritannia] Transcending the Human, DIY Style Message-ID: <59DB5BF891364462AF54AC701C3DFA36@DFC68LF1> Lepht is not using DIY bio effectively or smartly. She may possibly be a "cutter," and she seems to lack any medical knowledge of batteries and of the fact that the battery will have to be removed; she can easily have issues with rejection. (Stelarc has a major problem with his implant, and that implant was his own tissue!) I have applied add-on and add-in adjustments to my physiology as a biological artist (a bioartist doing biology), and believe me, they are not fun or easy. They are very painful: my own body rejected the elements, and I had months of pain and multiple antibiotics to deal with my immune system and an allergic reaction to the medicine. Anyone working in DIY bio needs to have a little medical background. 
Anyone working in DIY transhumanist bio needs to know what transhumanism means, and that DIY bio, if transhumanist in scope, is based on more than just putting things in our bodies. Natasha Natasha Vita-More From kanzure at gmail.com Sun Jan 2 22:24:57 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 2 Jan 2011 16:24:57 -0600 Subject: [ExI] Fwd: "Transcending the Human, DIY Style" In-Reply-To: <20110102231047.3ea71e36@irregularity> References: <20110102231047.3ea71e36@irregularity> Message-ID: ---------- Forwarded message ---------- From: phryk Date: Sun, Jan 2, 2011 at 4:10 PM Subject: Re: [Body Hacking] "Transcending the Human, DIY Style" To: bodyhacking at lists.caughq.org Well, I'm not Quinn or Todd, but I can say that the insertion of a neodymium implant is virtually painless, if done in a professional environment. It also doesn't affect me when typing stuff. Since I'm pretty squeamish, I was scared shitless before the procedure was done. I consulted Lepht before having it done, and the description did not give me any better feeling (wording was something akin to "They hurt like a fucker going in"), but since I wanted it real badly I set a date for the implantation and went ahead. When it was done, I was positively surprised as to how easy, painless and short the procedure was. It was simply opening the finger, putting the implant in and stitching it. I could use my laptop keyboard on the same day, not using the finger that got the implant. After about three days I slowly started to use this finger again, and after 10 days, I could pull the strings and type completely normally. All in all, I'd do it again (and probably will, since I got a positive answer regarding spatial perception with multiple implants) and recommend it to anyone interested in this. Only con is the price: I paid 200 € for mine, but the normal price should be around 150 €, I think. 
Since I don't really see any value in money, I'm not minding that much, though ;) Greetings, phryk -- http://phryk.net - eating your babies since 2009 -----BEGIN GEEK CODE BLOCK----- Version: 3.1 GCM/S d--@ s+: a-- C++++ UL+++>++++ UB+>++++ P+>+++ W+++ w--- PS+++ Y+ tv-- b++ ------END GEEK CODE BLOCK------ -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubkin at unreasonable.com Sun Jan 2 23:45:24 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sun, 02 Jan 2011 18:45:24 -0500 Subject: [ExI] Asimov's 90th today Message-ID: <201101030033.p030XVOE001258@andromeda.ziaspace.com> http://twitter.com/#!/Disalmanac/statuses/21642302267068416 >Today in 1920, Isaac Asimov was born. His 3 Laws of Robotics: 1) Do a little >dance 2) Make a little love 3) Get down tonight. -- David. From spike66 at att.net Mon Jan 3 01:07:43 2011 From: spike66 at att.net (spike) Date: Sun, 2 Jan 2011 17:07:43 -0800 Subject: [ExI] Asimov's 90th today In-Reply-To: <201101030033.p030XVOE001258@andromeda.ziaspace.com> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> Message-ID: <002a01cbaae2$a099dcc0$e1cd9640$@att.net> ... On Behalf Of David Lubkin Sent: Sunday, January 02, 2011 3:45 PM Subject: [ExI] Asimov's 90th today http://twitter.com/#!/Disalmanac/statuses/21642302267068416 >Today in 1920, Isaac Asimov was born. His 3 Laws of Robotics: 1) Do a >little dance 2) Make a little love 3) Get down tonight. -- David. Happy 91st birthday Dr. Asimov! I know what I was doing 21 years ago today. I was on business travel in Colorado Springs, arrived early enough in the day to go looking around the used book stores. I was trying to complete a collection of Asimov's nonfiction essays from Fantasy and Science Fiction magazine. The older ones were hard to find, and I had been searching for a long time, five years. 
On 2 January 1990, I found the last two to complete that collection, coincidentally 70 years from when Asimov celebrated his birth (no one knows exactly when he was born, because of the uncertainty of post-commie-revolution Russia and differences in calendars between Russia and the US.) I posted to him, but he was already too sick at that time to have much interest in communicating with his fan base. I ended up naming my son after him. spike From thespike at satx.rr.com Mon Jan 3 01:44:26 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 02 Jan 2011 19:44:26 -0600 Subject: [ExI] Asimov's 90th today In-Reply-To: <201101030033.p030XVOE001258@andromeda.ziaspace.com> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> Message-ID: <4D2129FA.6080005@satx.rr.com> On 1/2/2011 5:45 PM, David Lubkin fwd'd: > Today in 1920, Isaac Asimov was born. His 3 Laws of Robotics: 1) Do a > little dance 2) Make a little love 3) Get down tonight. That would be 3) Get down another 10 chapters by tonight. From lubkin at unreasonable.com Mon Jan 3 01:58:43 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sun, 02 Jan 2011 20:58:43 -0500 Subject: [ExI] Asimov's 90th today In-Reply-To: <002a01cbaae2$a099dcc0$e1cd9640$@att.net> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> Message-ID: <201101030158.p031wnnK002458@andromeda.ziaspace.com> Spike wrote: >I posted to him, but he was already too sick at that time to have much >interest in communicating with his fan base. I ended up naming my son after >him. Back in college, I chaired two sf conventions. The Guest of Honor of one was Fred Pohl; the Fan Guest of Honor was Devra Langsam. I secretly called up their best friends and asked them to write tributes for the con book. Fred Pohl's best friend was Isaac, one of the two people he would admit were smarter than he. (The other was Minsky.) He was delighted to have a chance to praise Fred. 
I know a few women (including my mother) who he wrote dirty limericks about. They all took every opportunity to recite their limerick. The most challenging of these for Isaac was Audrey Likely, PR director for the American Institute of Physics. He was proud of rhyming Audrey with bawdry and tawdry. -- David. From sjatkins at mac.com Mon Jan 3 02:08:11 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 02 Jan 2011 18:08:11 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: References: Message-ID: <6C190FB4-8D78-4858-B8CB-D536F2C666EF@mac.com> On Jan 1, 2011, at 5:39 PM, Keith Henson wrote: > On Sat, Jan 1, 2011 at 1:19 PM, Samantha Atkins wrote: > >> On Jan 1, 2011, at 2:40 AM, Keith Henson wrote: >> >>> On Fri, Dec 31, 2010 at 11:07 PM, Samantha Atkins wrote: > >>>> >>>>> Based on >>>>> Jordin Kare's work, this takes a flotilla of mirrors in GEO. Current >>>>> space technology is good enough to keep the pointing error down to .7 >>>>> meters at that distance while tracking the vehicle. The lasers don't >>>>> need to be on the equator so they can be placed where there is grid >>>>> power. They need to be 30-40 deg to the east of the launch point. >>>>> >>>> >>>> Uh huh. What is the max distance you are speaking of? >>> >>> Around one sixth of the circumference 40,000/6, 6,666 km. >> >> That amounts to about 0.002 MOA tracking a rocket through atmosphere. > > MOA? > Minute of arc. >> If we can do that then we can shoot down any old missile, any time with perfect accuracy. > > The possibility of the laser beam going off target for some reason is > why you want a long path to the east over water. > > But yes, this transport method does have some rather obvious military > applications. A 6 GW laser beam delivers the energy of 1.5 tons of > TNT per second. > What I was attempting to point out is that we obviously do not have this kind of ability today nor, as far as a quick scan showed, is it expected any time soon. 
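The figures being traded here are easy to sanity-check with a few lines of Python. This is a rough sketch that simply takes the numbers quoted above as given (0.7 m pointing error at the ~6,666 km maximum range, a 6 GW beam) and uses the standard 4.184 GJ/ton convention for TNT equivalence:

```python
import math

# Numbers quoted in the exchange above (taken as given, not verified):
pointing_error_m = 0.7   # claimed pointing error at maximum range
slant_range_m = 6.666e6  # ~one sixth of Earth's circumference
beam_power_w = 6e9       # 6 GW laser beam

# Angular pointing requirement, converted to minutes of arc (MOA).
angle_rad = pointing_error_m / slant_range_m
angle_moa = math.degrees(angle_rad) * 60

# Beam energy per second expressed in tons of TNT (1 ton TNT = 4.184 GJ).
tnt_tons_per_s = beam_power_w / 4.184e9

print(f"pointing requirement: {angle_moa:.5f} MOA")              # about 0.0004 MOA
print(f"beam energy per second: {tnt_tons_per_s:.2f} tons TNT")  # about 1.4
```

The TNT figure supports the "1.5 tons per second" claim; the angular requirement works out a few times tighter than the 0.002 MOA estimate above, but in the same regime — small fractions of an arcsecond.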
> snip > >> The current record for a small test vehicle climbing an admittedly low power beam is measured in the hundreds of feet. > > The ones that have gone up a few hundred feet are not related at all > to this kind of setup. They only work in the atmosphere. This works > best outside. > But you have a very long tracking path in conditions containing many possible sorts of turbulence and perturbations from ideal paths. One of the challenges of the earth-side experiment was in dealing with these to keep the beam properly centered. I expect that even if only using laser propulsion starting around 300 km up there would still be some such issues. >> A power beam that strong would bring issues of whether it would propel or melt the nozzles. If the beam got a bit off center then it could be a real danger to the rocket itself which presumably is not of a high melting point alloy such as the nozzles would be. > > The current thoughts on the design have the laser beam going through a > sapphire window filled with cold flowing 10-20 bar hydrogen. 6 GW > sounds like a lot, but it is absorbed over close to 1000 square > meters. So I am tracking a target approximately 32 m across up to 6000 km away with a mirror system that moves according to where the target "should" be rather than where it perhaps actually is due to the unexpected and/or incalculable. One nice thing about these big laser beams to orbit is they could accidentally or on purpose de-orbit a lot of space junk that has accumulated there. :) Hmmm. Perhaps a decent feasibility test is to do some such target shooting. Of course there would be a major international uproar over such. > So that's 6 MW per square meter. That's in the range of what > happens inside the fire box of a coal fired power plant. Thought > about on a smaller scale, it's 600 W per square cm. It's not hard to > imagine a 1 cm square hole dumping 600 watts of heat into a flowing > stream of hydrogen and heating the gas to 3000 deg K. 
Regen cooling > keeps the nozzle from getting too hot. > Yes, I believe that part can work in principle. I am worried by the required accuracy under real conditions though. >> The aiming is by no means trivial. > > I didn't mean to give the impression it was. However, the pointing > accuracy of Hubble is less than a meter from GEO to the trajectory > path. Tracking is slow, traversing about 8 deg in 900 sec. > >> Nor is the amount of power needed by the lasers. > > It's a huge consideration. At 50% overall, the grid draw would be 12 > GW. On the other hand, Three Gorges is 22 GW. So if the average SBSS produces 5 GW it will take nearly all the output of three of them to run this sort of launch pattern. Is the 12 GW the minimum necessary for using this type of launch on this size of payload? The answer changes the payoff and initial cost times considerably. > >> How do the orbital mirrors station-keep while reflecting that intense a power beam? > > It's not particularly intense. The mirrors in GEO are 30 meters across. How much of the power beam is hitting each one? > >> What is the required station keeping and mirror adjustment speed? > > You can compensate for the light pressure by orbiting 4 km inside GEO. > Tracking is as above, slow. Slow tracking gives no room for any perturbations in flight path, right? Using ablative laser launch, there almost certainly will be perturbations. You are boiling off material which changes the effective beam strength in what seems to me a rather chaotic roiling pattern. But the description of the window etc. above may be of help in handling this problem by directing the superheated hydrogen. It would be great to see a ground-based demonstration of such an engine in a controlled smaller environment. One of the objectives would be understanding the likely turbulence. > >> What kind of lasers do you have in mind for this application. 
This site, http://www.rp-photonics.com/high_power_lasers.html, doesn't lead me to think multi-GW lasers are particularly straightforward, especially not for such sustained high-precision power levels. > > I don't understand why you think high precision power levels are required. You don't? You want a combined 6 GW I believe. Even if you use a lot of lower-powered lasers, you have spread the problem out over many, many beams that must arrive on target. How many beams are you thinking of? > >> The most powerful ground based lasers I could find were anti-missile lasers that seemed to top out at 10 MW or so. These were not atmosphere compensated. How much power will you lose to atmosphere compensation? I understand thus far that atmospheric self-focusing only works in narrow power ranges defined by the type of laser used, atmospheric conditions and amount of atmosphere to be traversed. All of this doesn't lead me to believe this is so straightforward. > > I really don't like arguments from authority, but Dr. Jordin Kare > http://en.wikipedia.org/wiki/Jordin_Kare knows far more about this > than I do. However, the proposal does not use power levels where you > get atmospheric distortions. Clouds at the laser end will be a > problem. > What I can find from Jordin Kare hasn't set my mind at ease on these questions. - samantha From spike66 at att.net Mon Jan 3 02:03:03 2011 From: spike66 at att.net (spike) Date: Sun, 2 Jan 2011 18:03:03 -0800 Subject: [ExI] Asimov's 90th today In-Reply-To: <201101030158.p031wnnK002458@andromeda.ziaspace.com> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> Message-ID: <003401cbaaea$5bc075c0$13416140$@att.net> -----Original Message-----On Behalf Of David Lubkin ... I know a few women (including my mother) who he wrote dirty limericks about. They all took every opportunity to recite their limerick...-- David. 
OK so do relate your mother's limerick. Pleeeeease? {8-] s From technologiclee at gmail.com Mon Jan 3 01:53:38 2011 From: technologiclee at gmail.com (Lee Nelson) Date: Sun, 2 Jan 2011 19:53:38 -0600 Subject: [ExI] Broken Cyborg (was Re: "Transcending the Human, DIY Style") Message-ID: > We're in it for human enhancement, synthetic biologies, longevity, > nootropics, software, *prosthetics*, > As you can see from the x-rays (most clearly in the fourth picture), I have a titanium full knee replacement with a broken rod that takes the place of a tibia. http://picasaweb.google.com/technologiclee/ProstheticImplantXRays# *History:* This started as an osteogenic sarcoma in the tibia at age 15. http://www.google.com/search?q=ostogenic+sarcoma There was a year of chemotherapy. The knee was replaced with a cadaver bone that was shattered a year later. The knee was then replaced by the hardware shown in the x-rays. I had good mobility for about 10 years. Then one day the rod just broke in mid-step. This is probably due to metal fatigue. The rod has been broken for over two years now. *My options:* I have not found a doctor that is interested in attempting to replace this hardware yet. Maybe my new health care will bring help this year. At the time of the surgery this was a fairly new procedure. Since then it has become more commonplace, but it is still a fairly complex surgery. There would be no guarantee of improved mobility, and there would be the possibility of infection, amputation, and/or death. Honestly, I do not think that the procedures currently available are what I want. I would like something more 'natural', like having a replacement knee printed by a bone printer and installed. There is already nerve, muscle and skin damage, so for an ideal solution, stem cells would be used to regenerate the damaged tissues. 
http://www.google.com/search?q=bone+printer What would be better is to augment the knee with robotics internally or as an external brace, like they are working on at the MIT Leg Lab. http://www.ai.mit.edu/projects/leglab/ *Best Solution:* Now that the future is here, the hardware and software for 'medical nanobots' are almost ready. What would this entail? Removing the hardware. Directing stem cells to sites of damage and allowing them to organize into appropriate tissues. This means that a 3-D model of the leg would be made and a set of instructions prepared to direct the robots. This is the next 'Killer App'. I would like for this to be the start of a thread about medical hardware and software in terms of radical reconstruction. What is the best medical and machine control software available to base this on? For a start there is EMC machine control software and Google Body. http://www.google.com/search?q=medical+nanobot http://www.google.com/search?q=EMC+machine+control http://www.google.com/search?q=google+body Do you remember the 'tissue processing' scene from the movie The Fifth Element? Watch the hardware and software in this clip. This is the goal. This is something that is coming together from every corner of research. http://en.vidivodo.com/183936/the-fifth-element-part-1 *Trans-dermal Implant:* During the chemotherapy I had a "*port* (or *portacath*)" installed and later removed. http://en.wikipedia.org/wiki/Port_%28medical%29 In terms of size and considering modern electronics, that is enough volume to place something like a low power processor. http://www.google.com/search?q=low+power+implantable+computer http://www.google.com/search?q=implantable+computer *P.S.* Did you see the new spray-on-skin technique? http://www.google.com/search?q=spray+on+skin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Mon Jan 3 02:19:32 2011 From: spike66 at att.net (spike) Date: Sun, 2 Jan 2011 18:19:32 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: <6C190FB4-8D78-4858-B8CB-D536F2C666EF@mac.com> References: <6C190FB4-8D78-4858-B8CB-D536F2C666EF@mac.com> Message-ID: <003601cbaaec$a9261480$fb723d80$@att.net> ... On Behalf Of Samantha Atkins ... >...One nice thing about these big laser beams to orbit is they could accidentally or on purpose de-orbit a lot of space junk that has accumulated there. :) I don't follow you there. Suppose you hit a dead satellite with a laser. Now you may cause some pieces to break off, but that doesn't actually deorbit anything. >... Hmmm. Perhaps a decent feasibility test is to do some such target shooting... We have the control systems adequate to do this now. But it isn't clear to me what would be accomplished, other than create a huge mess of orbiting debris. >... Of course there would be a major international uproar over such... Ja I can imagine that. We currently have a big problem with our LEO satellites gradually losing power from solar panels getting pitted by 10 to 100 micron class debris. Shooting dead satellites with a laser would make that problem waaay worse I fear, and the US has more to lose than anyone. spike From atymes at gmail.com Mon Jan 3 02:03:36 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 2 Jan 2011 18:03:36 -0800 Subject: [ExI] simulation as an improvement over reality In-Reply-To: References: <005e01cba2d7$3fa8da50$befa8ef0$@att.net> <006701cba2f2$31cf0fb0$956d2f10$@att.net> <20101224112644.GH16518@leitl.org> <5C4D53C6-00A9-45BC-93D5-B5055D719E7A@bellsouth.net> <001e01cba472$b3ea60e0$1bbf22a0$@att.net> <002c01cba477$b1277e10$13767a30$@att.net> <000f01cba4b9$30e74900$92b5db00$@att.net> <20101228130001.GG16518@leitl.org> <7AF4033D-D954-48FF-865B-43815AF70B47@mac.com> Message-ID: On Thu, Dec 30, 2010 at 4:15 AM, Samantha Atkins wrote: > Don't land them or not most of them! 
Much of the volatiles and other materials are needed in space or on the moon. Their eventual money-making potential is much larger there. Send rare earths, precious metals and so on down to the surface to raise more money faster but keep much of the rest for building out near-Earth infrastructure. Yeah, see, this is the main problem I have to overcome a lot when pitching this idea. 1) Whatever the eventual money-earning potential is, there is none today. 2) The initial stages are likely to be extremely cash-strapped. 3) We can always go get more asteroids. (At least, for as long out as it makes sense to plan in much detail.) Therefore: Land all or most of the initial rock. Anything you can sell for much profit Earth-side, plan on doing so. Only the least valuable stuff - which may still be quite a lot - remains up there. But do land less of it later on, once cash is no longer in such extremely short supply. > Except you don't have enough trained humans to do space walk work. Training more humans is a trivial expense, relative to the expense of the rest of this. As has been suggested by others, oil rig workers might perform well. From lubkin at unreasonable.com Mon Jan 3 02:43:01 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sun, 02 Jan 2011 21:43:01 -0500 Subject: [ExI] Asimov's 90th today In-Reply-To: <003401cbaaea$5bc075c0$13416140$@att.net> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> <003401cbaaea$5bc075c0$13416140$@att.net> Message-ID: <201101030243.p032h1Hc028321@andromeda.ziaspace.com> Spike wrote: >OK so do relate your mother's limerick. Pleeeeease? {8-] Sorry, no. Now that she's retired from Physics Today, she's working on her memoirs from forty years in the forefront of science journalism, and it's going there. 
I've suggested that, since she's used to the rhythm of issue deadlines, I set up a site for her where she could post each piece as she'd written it. Then we could spread the word, and maybe draft some of her Laureate pals to add their comments or guest-author. It would build a buzz for the book, keep her impetus going, and potentially be fun or lucrative by itself. But she's still in the mindset of writing for paper. I'm building a set of Drupal web sites though, including a blog or two for me, and maybe seeing them up will start changing her mind. -- David. From js_exi at gnolls.org Mon Jan 3 07:52:04 2011 From: js_exi at gnolls.org (J. Stanton) Date: Sun, 02 Jan 2011 23:52:04 -0800 Subject: [ExI] Another technical paper for the CR/life extension crowd Message-ID: <4D218024.8000007@gnolls.org> We're starting to understand how CR actually extends life. Here is a very recent paper (November 2010): Sirt3 Mediates Reduction of Oxidative Damage and Prevention of Age-Related Hearing Loss under Caloric Restriction. Shinichi Someya, Wei Yu, William C. Hallows, Jinze Xu, James M. Vann, Christiaan Leeuwenburgh, Masaru Tanokura, John M. Denu, Tomas A. Prolla. Cell - 24 November 2010 (Vol. 143, Issue 5, pp. 802-812) http://www.cell.com/abstract/S0092-8674%2810%2901138-4 "...Here, we report that CR reduces oxidative DNA damage in multiple tissues and prevents AHL in wild-type mice but fails to modify these phenotypes in mice lacking the mitochondrial deacetylase Sirt3, a member of the sirtuin family. In response to CR, Sirt3 directly deacetylates and activates mitochondrial isocitrate dehydrogenase 2 (Idh2), leading to increased NADPH levels and an increased ratio of reduced-to-oxidized glutathione in mitochondria. In cultured cells, overexpression of Sirt3 and/or Idh2 increases NADPH levels and protects from oxidative stress-induced cell death. 
Therefore, our findings identify Sirt3 as an essential player in enhancing the mitochondrial glutathione antioxidant defense system during CR and suggest that Sirt3-dependent mitochondrial adaptations may be a central mechanism of aging retardation in mammals." This is plausible, because we already know that overexpression of SIRT3 is associated with longevity in humans: A novel VNTR enhancer within the SIRT3 gene, a human homologue of SIR2, is associated with survival at oldest ages http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WG1-4F14YRG-1&_user=10&_coverDate=02%2F28%2F2005&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=a429c5894fbe3f3ca905a606ae7efa90&searchtype=a "First, we searched for variability in the human sirtuin 3 gene (SIRT3) and discovered a VNTR polymorphism (72-bp repeat core) in intron 5. The alleles differed both for the number of repeats and for presence/absence of potential regulatory sites. Second, by transient transfection experiments, we demonstrated that the VNTR region has an allele-specific enhancer activity. Third, by analyzing allele frequencies as a function of age in a sample of 945 individuals (20-106 years), we found that the allele completely lacking enhancer activity is virtually absent in males older than 90 years. Thus the underexpression of a human sirtuin gene seems to be detrimental for longevity as it occurs in model organisms." JS http://www.gnolls.org From eugen at leitl.org Mon Jan 3 08:40:22 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 3 Jan 2011 09:40:22 +0100 Subject: [ExI] Spacecraft (was MM) In-Reply-To: <003601cbaaec$a9261480$fb723d80$@att.net> References: <6C190FB4-8D78-4858-B8CB-D536F2C666EF@mac.com> <003601cbaaec$a9261480$fb723d80$@att.net> Message-ID: <20110103084022.GH16518@leitl.org> On Sun, Jan 02, 2011 at 06:19:32PM -0800, spike wrote: > > ... On Behalf Of Samantha Atkins > ... 
> > >...One nice thing about these big laser beams to orbit is they could > accidentally or on purpose de-orbit a lot of space junk that has accumulated > there. :) > > I don't follow you there. Suppose you hit a dead satellite with a laser. > Now you may cause some pieces to break off, but that doesn't actually > deorbit anything. Presumably, the laser would make the target hot enough to cause asymmetric ablation, which can be used to lower the orbit until it decays spontaneously. It's admittedly far more likely to break the thing up into multiple pieces of debris, and thus exacerbate subsequent cleanup (IIRC there are plans to use a carbon mesh on a tug to collect debris). Wonder who's going to pony up the cleanup costs? > >... Hmmm. Perhaps a decent feasibility test is to do some such target > shooting... > > We have the control systems adequate to do this now. But it isn't clear to > me what would be accomplished, other than create a huge mess of orbiting > debris. It'd be better using a maglev launch to ~Mach 1 or higher at 6 km height and subsequent tracking by a battery of multiple 1 MW lasers. If that doesn't work, you can still use a conventional rocket, or a scramjet/rocket hybrid. > >... Of course there would be a major international uproar over such... > > Ja I can imagine that. We currently have a big problem with our LEO > satellites gradually losing power from solar panels getting pitted by 10 to > 100 micron class debris. Shooting dead satellites with a laser would make > that problem waaay worse I fear, and the US has more to lose than anyone. The amount of debris out there is getting out of hand. IIRC the recent space plane got hit by at least 9. If this continues, this will impact manned spaceflight and extravehicular activities especially. You'd also need armoring, which will increase weight. Some orbits, or whole orbit ranges, would become off-limits. And people could start salting orbits with tungsten or uranium balls as celestial terrain denial. 
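For a sense of scale on de-orbiting by ablation: once you can apply thrust at all, the retrograde delta-v needed to push a LEO object's perigee down into the atmosphere is modest. A back-of-the-envelope sketch follows, using plain two-body orbital mechanics; the 800 km and 100 km altitudes are illustrative assumptions, and the messy ablation physics is ignored entirely:

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def deorbit_dv(alt_km, perigee_km=100.0):
    """Retrograde delta-v (m/s) to lower a circular orbit's perigee
    to perigee_km altitude, where drag can finish the de-orbit."""
    r1 = R_EARTH + alt_km * 1e3      # initial circular orbit radius
    r2 = R_EARTH + perigee_km * 1e3  # target perigee radius
    v_circ = math.sqrt(MU / r1)      # circular orbit speed
    a = (r1 + r2) / 2.0              # semi-major axis of transfer ellipse
    v_apo = math.sqrt(MU * (2.0 / r1 - 1.0 / a))  # vis-viva at apogee
    return v_circ - v_apo

print(f"{deorbit_dv(800.0):.0f} m/s")  # on the order of 200 m/s from 800 km
```

Roughly 200 m/s from an 800 km circular orbit — small next to the ~7.5 km/s orbital velocity, which is why sustained asymmetric ablation is at least conceivable as a de-orbit mechanism, even if a fragmenting, tumbling target is the more likely outcome.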
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sjatkins at mac.com Mon Jan 3 11:50:27 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 03 Jan 2011 03:50:27 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: <003601cbaaec$a9261480$fb723d80$@att.net> References: <6C190FB4-8D78-4858-B8CB-D536F2C666EF@mac.com> <003601cbaaec$a9261480$fb723d80$@att.net> Message-ID: <1DD922F7-54D3-4E42-A7DC-4FBBDC646231@mac.com> On Jan 2, 2011, at 6:19 PM, spike wrote: > > ... On Behalf Of Samantha Atkins > ... > >> ...One nice thing about these big laser beams to orbit is they could > accidentally or on purpose de-orbit a lot of space junk that has accumulated > there. :) > > I don't follow you there. Suppose you hit a dead satellite with a laser. > Now you may cause some pieces to break off, but that doesn't actually > deorbit anything. > I was toying idly with the notion that some launch laser systems might be powerful enough to burn up or melt some space junk. Not the best of ideas. :) - s From stefano.vaj at gmail.com Mon Jan 3 15:11:12 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 3 Jan 2011 16:11:12 +0100 Subject: [ExI] Von Neumann probes for what? In-Reply-To: <51E1B34A-5562-484F-BA6C-B74479873D34@mac.com> References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <20110101170536.GD16518@leitl.org> <51E1B34A-5562-484F-BA6C-B74479873D34@mac.com> Message-ID: On 1 January 2011 19:03, Samantha Atkins wrote: > Not so or I don't see why it would be so and the probe have any real capacity to do any good for the originating civ. ?A mere cosmic yeast mold is not terribly useful to anyone unless you like cosmic yeast. Unless you *are* the cosmic yeast. The only problematic aspect is the jump from a biological species to a Neumann probe structure. 
Then Darwinian mechanisms kick in, the program being the way the replicators get replicated, not the replicators the way to diffuse the program... -- Stefano Vaj From jonkc at bellsouth.net Mon Jan 3 15:51:48 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 3 Jan 2011 10:51:48 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <20110101182448.GI16518@leitl.org> Message-ID: On Jan 1, 2011, at 3:19 PM, Mike Dougherty wrote: > If I make a Perfect Copy(tm) then throw it down a black hole, are we still synch'd? They'd be about as well synched as clocks would be synched if you blew up one with a stick of dynamite but not the other. I don't see the point you were trying to make and I sure don't see why you brought up something as exotic as Black Holes into the discussion. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Mon Jan 3 17:30:06 2011 From: anders at aleph.se (Anders Sandberg) Date: Mon, 03 Jan 2011 18:30:06 +0100 Subject: [ExI] Singletons In-Reply-To: <20110101164211.GA16518@leitl.org> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> Message-ID: <4D22079E.2050201@aleph.se> Eugen Leitl wrote: > The problem that the local rules cannot be static, if the underlying > substrate isn't. And if there's life, it's not static. Unless the > cop keeps beating you into submission every time you deviate from > the rules. > Which could be acceptable if the rules are acceptable. 
Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay. The system monitors all activity, and stomps on attempts at making the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object too. Now, is this really unacceptable and/or untenable? The rigidity of rules the singleton enforces can be all over the place from deterministic stimulus-responses to the singleton being some kind of AI or collective mind. The legitimacy can similarly be all over the place, from a basement accidental hard takeoff to democratic one-time decisions to something that is autonomous but designed to take public opinion into account. There is a big space of possible singleton designs. > Let's look at a population of cultures the size of a galaxy. How do > you produce an existential risk within a single system that can wipe > more than a stellar system? In order to produce larger scale mayhem > you need to utilize the resources of a large number of stellar > systems concertedly, which requires large scale cooperation of > pangalactic EvilDoers(tm). > If existential risks are limited to local systems, then at most there is a need for a local singleton (and maybe none, if you like living free and dangerously). However, there might be threats that require wider coordination or at least preparation. Imagine interstellar "grey goo" (replicators that weaponize solar systems and try to use existing resources to spread), and a situation of warfare where the square of number of units gives the effective strength (as per the Lanchester law; whether this is actually true in real life will depend on a lot of things). In that case allowing the problem to grow in a few systems far enough would allow it to become overwhelming. 
In this case it might be enough to coordinate defensive buildup within a broad ring around the goo, but it would still require coordination - especially if there were the usual kind of public goods problems in doing it. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From jonkc at bellsouth.net Mon Jan 3 17:28:17 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 3 Jan 2011 12:28:17 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D20AE4F.1090501@satx.rr.com> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> Message-ID: <62FDA911-88C6-4CF7-98C7-FED793891F4B@bellsouth.net> On Jan 2, 2011, at 11:56 AM, Damien Broderick wrote: > you have to hope nobody is stupid enough to perfectly copy your vitrified brain and leave it at that So you're OK with obtaining all the information needed to make another identical living brain, but if you were to "leave it at that" it would be insufficient, something very important would be missing that is needed to preserve the real you; to do that you think it is vitally important to keep all the particular atoms in that vitrified brain, even though science can find no difference between one atom of the same element and another (sorry Damien, I know it annoys you when I bring up little things like that, FACTS that contradict your worldview, but there it is). 
Of course a vitrified brain by itself does nobody any good so you wouldn't even want to preserve the spatial arrangement in it, you want the arrangement of those sacred atoms to mimic how they were when the brain was alive and well, and to do that you need to know how those atoms were arranged when the brain was working properly; and that's where information comes in. You have implied that you don't mind obtaining that information provided you carefully retain all your original atoms too, because they somehow have your name scratched on them even though the scientific method cannot read that writing. So I guess even you would think the following procedure would work: use a brain slicing machine like the one invented by Kenneth J. Hayworth who has already made 30 nanometer thick slices of mouse brains: http://www.nytimes.com/2010/12/28/science/28brainside.html?_r=1 Another good article at: http://www.nytimes.com/2010/12/28/science/28brain.html?src=twrhp After a slice is made an electron microscope is then used to make a high resolution photograph of each of those very very thin slices, he hasn't finished an entire mouse brain yet but Hayworth thinks they will in a few years. So long as the atoms in the slices were retained after they were photographed (for sentimental reasons or whatever) would you be satisfied that the brain made with that photographic information and constructed from the same atoms would be sufficient to preserve the real you? Hayworth thinks so, although he makes no mention of bothering to retain the original atoms. Some quotes from the above articles: "The circuitry of the brain will be mapped," Mr. Hayworth predicted. "We will understand how this network of neurons is connected, how it stores memories, how it preserves the skills a person has and how these connections give rise to emotion." "Mr. Hayworth goes so far as to suggest that a person's brain map could be replicated in a computer one day.
In essence, someone could download their brain structure into a machine and have his or her personality live on." "In 100 years, if we have the technology to bring someone back, it won't be in a biological body," he said. "It is these scanning techniques and mind-uploading that, I think, will bring people back." "This is a taboo topic in the scientific community," he said. "But we have a cure to death right here. Why aren't we pursuing it?" John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Jan 3 17:47:26 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 3 Jan 2011 10:47:26 -0700 Subject: [ExI] Spacecraft (was MM) Message-ID: On Mon, Jan 3, 2011 at 5:00 AM, Samantha Atkins wrote: > > On Jan 1, 2011, at 5:39 PM, Keith Henson wrote: > >> On Sat, Jan 1, 2011 at 1:19 PM, Samantha Atkins wrote: >> >>> On Jan 1, 2011, at 2:40 AM, Keith Henson wrote: >>> >>>> On Fri, Dec 31, 2010 at 11:07 PM, Samantha Atkins wrote: > > > >> >>>>> >>>>>> Based on >>>>>> Jordin Kare's work, this takes a flotilla of mirrors in GEO. Current >>>>>> space technology is good enough to keep the pointing error down to .7 >>>>>> meters at that distance while tracking the vehicle. The lasers don't >>>>>> need to be on the equator so they can be placed where there is grid >>>>>> power. They need to be 30-40 deg to the east of the launch point. >>>>>> >>>>> >>>>> Uh huh. What is the max distance you are speaking of? >>>> >>>> Around one sixth of the circumference 40,000/6, 6,666 km. >>> >>> That amounts to about 0.002 MOA tracking a rocket through atmosphere. >> >> MOA? >> > > Minute of arc. Ah. The Hubble, which has been up for 20 years and is based on technology at least 10 years before that, has a pointing accuracy of 7 milliarcseconds. A milliarcsecond is about 5 x 10^-9 radians, so 7 would be about 35 x 10^-9 rad.
At the end of a 36,000,000 m radius, the error would be ~1.3 m >>> If we can do that then we can shoot down any old missile, any time with perfect accuracy. >> >> The possibility of the laser beam going off target for some reason is >> why you want a long path to the east over water. >> >> But yes, this transport method does have some rather obvious military >> applications. A 6 GW laser beam delivers the energy of 1.5 tons of >> TNT per second. >> > > What I was attempting to point out is that we obviously do not have this kind of ability today nor, as far as a quick scan showed, is it expected any time soon. It would be a tenfold scale-up of the largest CW laser, then buy them in the thousands. Of course it will *never* be done unless some group or government decides to do it. > >> snip >> >>> The current record for a small test vehicle climbing an admittedly low power beam is measured in the hundreds of feet. >> >> The ones that have gone up a few hundred feet are not related at all >> to this kind of setup. They only work in the atmosphere. This works >> best outside. >> > > But you have a very long tracking path in conditions containing many possible sorts of turbulence and perturbation from ideal paths. One of the challenges of the earth side experiment was in dealing with these to keep the beam properly centered. I don't think so. I have seen the video at the Beamed Energy Propulsion Conference a year ago and as I recall, the laser was fixed and the vehicle self centered in the beam. > I expect that even only using laser propulsion starting around 300 km up would still have some such issues. Laser propulsion of this kind has been proposed for a considerable time without (as far as I know) finding such problems. There were very similar proposals to use lasers from space to power aircraft back in the mid to late 70s. > > >>> A power beam that strong would bring issues of whether it would propel or melt the nozzles.
If the beam got a bit off center then it could be a real danger to the rocket itself which presumably is not of a high melting point alloy such as the nozzles would be. >> >> The current thoughts on the design have the laser beam going through a >> sapphire window filled with cold flowing 10-20 bar hydrogen. 6 GW >> sounds like a lot, but it is absorbed over close to 1000 square >> meters. > > So I am tracking a target approximately 32 m across up to 6000 km away with a mirror system that moves according to where the target "should" be rather than where it perhaps actually is due to the unexpected and/or incalculable. 36,000 km. A vehicle moving at upwards of 2 km/sec isn't going to deviate much from the expected path. It's also a cooperative target that would be telling the control system acceleration details and location. > > One nice thing about these big laser beams to orbit is they could accidentally or on purpose de-orbit a lot of space junk that has accumulated there. :) Hmmm. Perhaps a decent feasibility test is to do some such target shooting. Of course there would be a major international uproar over such. It's a topic of considerable interest. Short high intensity pulses work better. Google "laser ablation" "space debris" A CW laser is not as efficient, but small objects would just be vaporized in a few seconds. https://e-reports-ext.llnl.gov/pdf/245817.pdf It's possible to absorb MW per square meter into a flowing gas stream, but the radiation equilibrium is probably above the boiling point of aluminum, but probably less than reentry temperature. >> So that's 6 MW per square meter. That's in the range of what >> happens inside the fire box of a coal fired power plant. Thought >> about on a smaller scale, it's 600 W per square cm. It's not hard to >> imagine a 1 cm square hole dumping 600 watts of heat into a flowing >> stream of hydrogen and heating the gas to 3000 deg K. Regen cooling >> keeps the nozzle from getting too hot.
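Keith's flux arithmetic above is easy to verify; here is a minimal Python sketch using the thread's round numbers (6 GW of beam power, roughly 1000 square meters of absorber). The 4.184 GJ per ton TNT equivalence is the conventional value, not a figure from the thread:

```python
# Quick check of the power-density and TNT-equivalence figures in this
# exchange. The 6 GW and ~1000 m^2 numbers are taken from the messages
# above; the 4.184 GJ/ton TNT equivalence is the standard convention.
beam_power_w = 6e9          # combined 6 GW beam
absorber_area_m2 = 1000.0   # "absorbed over close to 1000 square meters"
tnt_j_per_ton = 4.184e9     # conventional TNT equivalent, not from the thread

flux_w_m2 = beam_power_w / absorber_area_m2
print(flux_w_m2 / 1e6)   # -> 6.0, MW per square meter, as quoted
print(flux_w_m2 / 1e4)   # -> 600.0, W per square cm, as quoted
print(beam_power_w / tnt_j_per_ton)  # -> ~1.43 tons TNT per second
```

The last figure comes out near 1.43 tons of TNT equivalent per second, consistent with the "1.5 tons of TNT per second" quoted earlier in the thread.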
>> > > Yes, I believe that part can work in principle. I am worried by the required accuracy under real conditions though. > >>> The aiming is by no means trivial. >> >> I didn't mean to give the impression it was. However, the pointing >> accuracy of Hubble is less than a meter from GEO to the trajectory >> path. Tracking is slow, traversing about 8 deg in 900 sec. >> >>> Nor is the amount of power needed by the lasers. >> >> It's a huge consideration. At 50% overall, the grid draw would be 12 >> GW. On the other hand, Three Gorges is 22 GW. > > So if the average SBSP satellite produces 5 GW it will take nearly all the output of three of them to run this sort of launch pattern. Yep. This is in the context of building 200 GW a year so the feedback isn't excessive. > Is the 12 GW the minimum necessary for using this type of launch on this size of payload? The answer changes the payoff and initial cost times considerably. You can bootstrap, especially with the LEO to GEO stage. >>> How do the orbital mirrors station-keep while reflecting that intense a power beam? >> >> It's not particularly intense. The mirrors in GEO are 30 meters across. > > How much of the power beam is hitting each one? One part in a thousand roughly. >> >>> What is the required station keeping and mirror adjustment speed? >> >> You can compensate for the light pressure by orbiting 4 km inside GEO. >> Tracking is as above, slow. > > Slow tracking gives no room for any perturbations in flight path, right? Talk to Spike about the control problem. But the acceleration is modest, around a g, and while the velocity is high, the angular velocity tracking the vehicle is low and you get feedback in less than 1/10th of a second. > Using ablative laser launch there almost certainly will be perturbations. You are boiling off material which changes the effective beam strength in what seems to me a rather chaotic roiling pattern.
With ablation you have to wait for the previous vapor cloud to get out of the way, typically about a ms. > But the description of the window etc above may be of help in handling this problem by directing the superheated hydrogen. It would be great to see a ground based demonstration of such an engine in a controlled smaller environment. One of the objectives would be understanding the likely turbulence. > It might not even cost a lot if laser diodes could be used directly through the window. A 3 by 3 array of absorber channels would take under 10 kW to heat. >> >>> What kind of lasers do you have in mind for this application? This site, http://www.rp-photonics.com/high_power_lasers.html, doesn't lead me to think multi-GW lasers are particularly straightforward, especially not for such sustained high precision power levels. >> >> I don't understand why you think high precision power levels are required. > > > You don't? You want a combined 6 GW I believe. Even if you use a lot of lower powered lasers you have spread the problem out over many many beams to arrive on target. How many beams are you thinking of? At least a thousand. Maybe 3% hot spares. >> >>> The most powerful ground based lasers I could find were anti-missile lasers that seemed to top out at 10 MW or so. These were not atmosphere compensated. How much power will you lose to atmosphere compensation? I understand thus far that atmospheric self-focusing only works in narrow power ranges defined by the type of laser used, atmospheric conditions and amount of atmosphere to be traversed. All of this doesn't lead me to believe this is so straightforward. >> >> I really don't like arguments from authority, but Dr. Jordin Kare >> http://en.wikipedia.org/wiki/Jordin_Kare knows far more about this >> than I do. However, the proposal does not use power levels where you >> get atmospheric distortions. Clouds at the laser end will be a >> problem.
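Keith's earlier claim that the beam hitting the relay mirrors is "not particularly intense" can also be put in numbers. This sketch assumes, per the thread, roughly a thousand beams at 6 GW total and 30 m mirrors in GEO; one beam per mirror is my assumption, as is the 1361 W/m^2 solar constant used for comparison:

```python
import math

# Per-mirror loading for the GEO relay flotilla, using figures from the
# thread: ~1000 beams, 6 GW combined, 30 m mirrors. One beam per mirror
# and the 1361 W/m^2 solar constant are assumptions for illustration.
total_power_w = 6e9
n_beams = 1000
mirror_diameter_m = 30.0
solar_constant_w_m2 = 1361.0

per_mirror_w = total_power_w / n_beams                 # 6 MW per mirror
mirror_area_m2 = math.pi * (mirror_diameter_m / 2)**2  # ~707 m^2
flux_w_m2 = per_mirror_w / mirror_area_m2              # ~8.5 kW/m^2
print(flux_w_m2 / solar_constant_w_m2)  # -> ~6.2 "suns" on the mirror
```

A few kilowatts per square meter, most of which a good mirror reflects rather than absorbs, is a modest thermal load, which squares with the "one part in a thousand" answer in the exchange above.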
>> > > > What I can find from Jordin Kare hasn't set my mind at ease on these questions. It's not likely to be an energy solution for the US so you don't need to be concerned. It's more likely to be something the Chinese do. Keith From jonkc at bellsouth.net Mon Jan 3 18:04:12 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 3 Jan 2011 13:04:12 -0500 Subject: [ExI] Singletons In-Reply-To: <4D22079E.2050201@aleph.se> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> <4D22079E.2050201@aleph.se> Message-ID: <237EC41B-330E-4ADB-98E2-B65294F73E3C@bellsouth.net> On Jan 3, 2011, at 12:30 PM, Anders Sandberg wrote: > Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay. Cosmic vacuum decay utterly destroying the entire universe, I hate it when that happens. But as Dr. Brown has reminded us "Of course that is a worst case scenario, the effect could be much more localized and just destroy the galaxy". It is difficult to predict how a newly discovered force in Physics will behave, that's why it's new. Madam Curie was certainly not stupid, and when she first discovered Radium she had not one scrap of information to think that the strange rays given off by that element were in any way dangerous, but it ended up killing her. Suppose nature is unkind on a much larger scale. Suppose that in the technological history of almost any civilization there will come a time when it will find hints of a new force in nature, and suppose there is a very obvious experiment to investigate that possibility, and suppose because it is so new there is not one scrap of information to think it is in any way dangerous so the experiment is performed. And then oblivion. 
Perhaps that explains the Fermi Paradox, in the context of Everett's Many World interpretation we happen to be living in a fantastically unlikely universe where nobody has thought of that very obvious and simple experiment, yet. John K Clark > The system monitors all activity, and stomps on attempts at making the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object too. > > Now, is this really unacceptable and/or untenable? > > > The rigidity of rules the singleton enforces can be all over the place from deterministic stimulus-responses to the singleton being some kind of AI or collective mind. The legitimacy can similarly be all over the place, from a basement accidental hard takeoff to democratic one-time decisions to something that is autonomous but designed to take public opinion into account. There is a big space of possible singleton designs. > > >> Let's look at a population of cultures the size of a galaxy. How do >> you produce an existential risk within a single system that can wipe more than a stellar system? In order to produce larger scale mayhem >> you need to utilize the resources of a large number of stellar >> systems concertedly, which requires large scale cooperation of >> pangalactic EvilDoers(tm). >> > > If existential risks are limited to local systems, then at most there is a need for a local singleton (and maybe none, if you like living free and dangerously). > > However, there might be threats that require wider coordination or at least preparation. Imagine interstellar "grey goo" (replicators that weaponize solar systems and try to use existing resources to spread), and a situation of warfare where the square of number of units gives the effective strength (as per the Lanchester law; whether this is actually true in real life will depend on a lot of things). 
In that case allowing the problem to grow in a few systems far enough would allow it to become overwhelming. In this case it might be enough to coordinate defensive buildup within a broad ring around the goo, but it would still require coordination - especially if there were the usual kind of public goods problems in doing it. > > > -- > Anders Sandberg, > Future of Humanity Institute > Philosophy Faculty of Oxford University > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Mon Jan 3 18:10:26 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 3 Jan 2011 19:10:26 +0100 Subject: [ExI] Singletons In-Reply-To: <4D22079E.2050201@aleph.se> References: <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> <4D22079E.2050201@aleph.se> Message-ID: <20110103181026.GB16518@leitl.org> On Mon, Jan 03, 2011 at 06:30:06PM +0100, Anders Sandberg wrote: > Which could be acceptable if the rules are acceptable. Imagine that The question is who makes the rules? Imagine a lowest common denominator rule enforcer, using quorum of all people on this planet. A very scary thought. > there is a particular kind of physics experiment that causes cosmic > vacuum decay. The system monitors all activity, and stomps on attempts > at making the experiment. Everybody knows about the limitation and can Sure, world-ending stuff (don't think this universe allows it, or else we wouldn't be able to read this). > see the logic of it. It might be possible to circumvent the system, but > it would take noticeable resources that fellow inhabitants would > recognize and likely object too.
> > Now, is this really unacceptable and/or untenable? I think it's unacceptable, because I don't believe such a thing could be done without creating terrible side effects. > The rigidity of rules the singleton enforces can be all over the place > from deterministic stimulus-responses to the singleton being some kind Stimulus-response would be a) not terribly efficacious, since easily circumvented b) fraught with friendly fire > of AI or collective mind. The legitimacy can similarly be all over the Ah, so it's our usual kind of despot. > place, from a basement accidental hard takeoff to democratic one-time > decisions to something that is autonomous but designed to take public Aargh. So the singleton can do whatever it wants by tweaking the physical layer. > opinion into account. There is a big space of possible singleton designs. That's the precise problem with this. It's basically Blight in a sheep's clothing. > If existential risks are limited to local systems, then at most there is > a need for a local singleton (and maybe none, if you like living free > and dangerously). Sometimes I get a cold. I can live with that. > However, there might be threats that require wider coordination or at > least preparation. Imagine interstellar "grey goo" (replicators that > weaponize solar systems and try to use existing resources to spread), That's basically the "getting a cold" scenario. Gray goo which can cross interstellar distances is indistinguishable from pioneers. Nothing to worry about, unless you haven't been born yet, or your immune system is yet undeveloped. The probability that the wave of common cold catches you just as you're being born is pretty much zero. > and a situation of warfare where the square of number of units gives the > effective strength (as per the Lanchester law; whether this is actually > true in real life will depend on a lot of things).
In that case allowing > the problem to grow in a few systems far enough would allow it to become > overwhelming. In this case it might be enough to coordinate defensive I don't see how blowing up thy neighbour is going to help you with taking over thy neighbor's resources (including chocolate and women). > buildup within a broad ring around the goo, but it would still require > coordination - especially if there were the usual kind of public goods > problems in doing it. Dunno, doesn't sound terribly convincing. Luckily, we're dealing in theoreticals here. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Mon Jan 3 18:39:45 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 3 Jan 2011 13:39:45 -0500 Subject: [ExI] Asimov's 90th today In-Reply-To: <201101030158.p031wnnK002458@andromeda.ziaspace.com> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> Message-ID: <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> On Jan 2, 2011, at 8:58 PM, David Lubkin wrote: > Fred Pohl's best friend was Isaac True. > one of the two people he would admit was smarter than he. (The other was Minsky.) Actually no, in his autobiography he said that both he and Pohl had IQ tests, I don't remember the exact numbers but both were in the upper 150's, however Asimov beat Pohl by one point. The only two people that Asimov had ever met that he thought were smarter than him were Marvin Minsky and Carl Sagan. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike66 at att.net Mon Jan 3 18:48:26 2011 From: spike66 at att.net (spike) Date: Mon, 3 Jan 2011 10:48:26 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: References: Message-ID: <00a101cbab76$cea19e20$6be4da60$@att.net> ... On Behalf Of Keith Henson ... >>> >>>> That amounts to about 0.002 MOA tracking a rocket through atmosphere. >> >>> MOA? >> >> Minute of arc. >Ah. The Hubble, which has been up for 20 years and is based on technology at least 10 years before that, has a pointing accuracy of 7 milliarcseconds. A milliarcsecond is about 5 x 10^-9 radians, so 7 would be about 35 x 10^-9 rad. At the end of a 36,000,000 m radius, the error would be ~1.3 m Ja, there are two different things being discussed here, actually three: tracking objects thru atmosphere, tracking a moving object and Hubble boresight accuracy. The Hubble is indeed an impressive control system, but it cannot track moving objects very competently. It is really really good at doing what it was designed to do, fix on a dim object and stay right on it for long periods. But it isn't nimble, doesn't need to be for that application. One can steer a battleship with a canoe paddle, if one is patient and has a few days to get it done. For anything that moves or depends on an optical feedback, an arc second of accuracy is probably still out of our reach, but there is plenty of useful stuff we could do with arc-second class ground based tracking. ... >> Slow tracking gives no room for any perturbations in flight path, right? >Talk to Spike about the control problem.
But the acceleration is modest, around a g, and while the velocity is high, the angular velocity tracking the vehicle is low and you get feedback in less than 1/10th of a second...Keith There are ways to do stuff like this using fast steering secondary mirrors, adaptable aperture, lotsa cool notions for laser propulsion that were actually developed from a weapons program, Airborne Laser: http://en.wikipedia.org/wiki/Boeing_YAL-1 Booeing was the prime contractor for this, but most of the pointing control and accuracy infrastructure was subcontracted to Lockheed Missiles and Space Company, in Sunnyvale Taxifornia, and also the Lockheed Advanced Technologies Center in Palo Alto, using Lockheed Martin control technology and Lockheed Martin control engineers, who incidentally worked for Lockheed Martin. Of course we used a Booeing product to carry our control system aloft, so those lads deserve credit where credit is due, but the control system was pure LMCo, and it is WICKED cool, do let me assure you, clever as all hell. Looks to me like we could adapt the accuracy infrastructure of the ABL to fly at about 10 to 12 km altitude and provide second stage ablative boost assist, from about 20 km to about 80-ish km altitude. So first stage mostly solid propulsion, second stage ABL ablative boost, third stage throwaway H2/LOX? We would need to get tricky with our optical feedback loops, but I think this is doable with current control law.
spike From thespike at satx.rr.com Mon Jan 3 19:11:40 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 03 Jan 2011 13:11:40 -0600 Subject: [ExI] Asimov's 90th [91st] today In-Reply-To: <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> Message-ID: <4D221F6C.80508@satx.rr.com> On 1/3/2011 12:39 PM, John Clark wrote: > >> one of the two people he would admit was smarter than he. (The other >> was Minsky.) > > Actually no, in his autobiography he said that both he and Pohl had IQ > tests, I don't remember the exact numbers but both were in the upper > 150's, however Asimov beat Pohl by one point. That's what Dave Lubkin said: one of the two people Pohl admitted was smarter than he. > The only two people that > Asimov had ever met that he thought were smarter than him were Marvin > Minsky and Carl Sagan. And Heinz Pagels. Damien Broderick From thespike at satx.rr.com Mon Jan 3 19:13:28 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 03 Jan 2011 13:13:28 -0600 Subject: [ExI] simulation as an improvement over reality.
In-Reply-To: <62FDA911-88C6-4CF7-98C7-FED793891F4B@bellsouth.net> References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> <62FDA911-88C6-4CF7-98C7-FED793891F4B@bellsouth.net> Message-ID: <4D221FD8.3000103@satx.rr.com> On 1/3/2011 11:28 AM, John Clark wrote: > >> you have to hope nobody is stupid enough to perfectly copy your >> vitrified brain and leave it at that > > So you're OK with obtaining all the information needed to make another > identical living brain Read what I wrote. A vitrified brain is not a living brain. Damien Broderick From alaneugenebrooks52 at yahoo.com Mon Jan 3 18:42:24 2011 From: alaneugenebrooks52 at yahoo.com (Alan Brooks) Date: Mon, 3 Jan 2011 10:42:24 -0800 (PST) Subject: [ExI] what a surprise Message-ID: <752706.72336.qm@web46110.mail.sp1.yahoo.com> Was at the Alcor site today and saw Max's photo in the CEO slot. A good deal as it is a two-for-one offer: with Max you get Natasha also, and that is good because as an Alcor member one might think two heads are better than one-- like getting Hillary with Bill in '92! [will unsubscribe after this msg as there is no shortage of posts at Extropy] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alfio.puglisi at gmail.com Mon Jan 3 18:56:13 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Mon, 3 Jan 2011 19:56:13 +0100 Subject: [ExI] Spacecraft (was MM) In-Reply-To: References: Message-ID: On Mon, Jan 3, 2011 at 6:47 PM, Keith Henson wrote: > On Mon, Jan 3, 2011 at 5:00 AM, Samantha Atkins wrote: > > > > On Jan 1, 2011, at 5:39 PM, Keith Henson wrote: > > > >> On Sat, Jan 1, 2011 at 1:19 PM, Samantha Atkins > wrote: > >> > >>> On Jan 1, 2011, at 2:40 AM, Keith Henson wrote: > >>> > >>>> On Fri, Dec 31, 2010 at 11:07 PM, Samantha Atkins > wrote: > > > > > > > >> > >>>>> > >>>>>> Based on > >>>>>> Jordin Kare's work, this takes a flotilla of mirrors in GEO. > Current > >>>>>> space technology is good enough to keep the pointing error down to > .7 > >>>>>> meters at that distance while tracking the vehicle. The lasers > don't > >>>>>> need to be on the equator so they can be placed where there is grid > >>>>>> power. They need to be 30-40 deg to the east of the launch point. > >>>>>> > >>>>> > >>>>> Uh huh. What is the max distance you are speaking of? > >>>> > >>>> Around one sixth of the circumference 40,000/6, 6,666 km. > >>> > >>> That amounts to about 0.002 MOA tracking a rocket through atmosphere. > >> > >> MOA? > >> > > > > Minute of arc. > > Ah. The Hubble, which has been up for 20 years and is based on > technology at least 10 years before that, has a pointing accuracy of 7 > milliarcseconds. A milliarcsecond is about 5 x 10^-9 radians, so 7 > would be about 35 x 10^-9 rad. At the end of a 36,000,000 m radius, > the error would be ~1.3 m > You are finally touching points where I can contribute something :-) Typical pointing accuracy for ground-based telescopes is on the order of 1 arcsecond, sometimes worse. The Hubble doesn't do much better (some references I saw speak of 0.2-0.5 arcsec, which I would regard as very good). The number you quote (7 milliarcsec) is for pointing *stability*, with feedback from a guide star. 
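[Keith's small-angle arithmetic quoted above can be checked numerically. A Python sketch reproducing the same calculation with the thread's figures (7 milliarcseconds of stability, 36,000 km distance); nothing here goes beyond those two numbers.]

```python
import math

# 1 milliarcsecond in radians: a degree is 3600 arcseconds, an
# arcsecond is 1000 milliarcseconds.
mas_to_rad = math.pi / (180 * 3600 * 1000)

stability_rad = 7 * mas_to_rad   # ~3.4e-8 rad ("about 35 x 10^-9")
distance_m = 3.6e7               # 36,000 km, GEO-scale radius

# Small-angle approximation: transverse error ~ radius * angle.
error_m = stability_rad * distance_m

print(f"{stability_rad:.2e} rad -> {error_m:.2f} m at 36,000 km")
```

This lands at roughly 1.2 m, matching the "~1.3 m" estimate in the email to within rounding.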
Hubble has the advantage of working outside the atmosphere, in diffraction-limited mode. A normal telescope inside the atmosphere has worse stability (20-30 milliarcsec or more), because the guide star feedback is smeared by atmospheric aberrations. A telescope equipped with an adaptive optics system (AO) can again reach a stability in the single-digit milliarcseconds range. All these numbers are for fixed, low-speed objects whose trajectory can be computed in advance. Therefore a laser-propulsion device cannot work without an AO-enabled launch system, which can keep your laser beam focused on your target and will track its motion. This supposes that the payload can give optical feedback about its position, but I imagine that the ablation process will make it quite bright :-) Such AO systems are now commonplace on big telescopes, but work in the milliwatt regime instead of GW, since they just reflect starlight :-) Laser-guided AO systems, where a laser is shot up in the sky to create a "fake" guide star, are starting to work right now, and if I remember correctly some of them are planning an adaptive launch mirror. Such lasers are on the order of 10 watts. I don't have the foggiest idea of what happens when the power is scaled by a factor of 10^9. Having seen the safety systems for such lasers, I would imagine that a multi-GW laser will be treated like a nuclear-testing ground. At least, I would keep the same distance. Alfio > Keith > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Mon Jan 3 19:18:45 2011 From: spike66 at att.net (spike) Date: Mon, 3 Jan 2011 11:18:45 -0800 Subject: [ExI] Singletons In-Reply-To: <237EC41B-330E-4ADB-98E2-B65294F73E3C@bellsouth.net> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> <4D22079E.2050201@aleph.se> <237EC41B-330E-4ADB-98E2-B65294F73E3C@bellsouth.net> Message-ID: <00b601cbab7b$0ac76d40$205647c0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Subject: Re: [ExI] Singletons On Jan 3, 2011, at 12:30 PM, Anders Sandberg wrote: >>.Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay. Anders >.Perhaps that explains the Fermi Paradox, in the context of Everett's Many World interpretation we happen to be living in a fantastically unlikely universe where nobody has thought of that very obvious and simple experiment, yet.John K Clark I thought of a related notion: perhaps there is some technology that causes a star to collapse to neutronium, so that in general, technology-enabled planets eventually stumble upon it and accidentally big-crunch the star about which they orbit, subsequently freezing all the surrounding planets, resulting in that tech-enabled life form disappearing forever without a whimper. I would reach for that as an explanation for dark matter, but it has a problem: it doesn't explain why the dark matter in a galaxy seems to be way more prevalent in the outer regions of a galaxy than in the galactic core. Perhaps the galactic core stars are more often sterilized by nearby gamma ray bursts, so technology has more time to develop in the galactic suburbs? I rely on imaginative SF writers to speculate on an explanation. 
spike The system monitors all activity, and stomps on attempts at making the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object to. Now, is this really unacceptable and/or untenable? The rigidity of rules the singleton enforces can be all over the place from deterministic stimulus-responses to the singleton being some kind of AI or collective mind. The legitimacy can similarly be all over the place, from a basement accidental hard takeoff to democratic one-time decisions to something that is autonomous but designed to take public opinion into account. There is a big space of possible singleton designs. Let's look at a population of cultures the size of a galaxy. How do you produce an existential risk within a single system that can wipe more than a stellar system? In order to produce larger scale mayhem you need to utilize the resources of a large number of stellar systems concertedly, which requires large scale cooperation of pangalactic EvilDoers(tm). If existential risks are limited to local systems, then at most there is a need for a local singleton (and maybe none, if you like living free and dangerously). However, there might be threats that require wider coordination or at least preparation. Imagine interstellar "grey goo" (replicators that weaponize solar systems and try to use existing resources to spread), and a situation of warfare where the square of number of units gives the effective strength (as per the Lanchester law; whether this is actually true in real life will depend on a lot of things). In that case allowing the problem to grow in a few systems far enough would allow it to become overwhelming. 
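[The Lanchester square law mentioned above can be illustrated with a toy model. A minimal Python sketch of the aimed-fire equations dA/dt = -b*B, dB/dt = -a*A; the unit counts and per-unit effectiveness rates are invented for illustration.]

```python
# Aimed-fire Lanchester model: each side's losses are proportional to
# the *number* of enemy units, so effective strength scales with the
# square of unit count. Crude Euler integration, toy numbers.

def lanchester(a_units, b_units, a_rate=1.0, b_rate=1.0, dt=0.001):
    """Integrate dA/dt = -b_rate*B, dB/dt = -a_rate*A until one side hits 0."""
    while a_units > 0 and b_units > 0:
        a_units, b_units = (a_units - b_rate * b_units * dt,
                            b_units - a_rate * a_units * dt)
    return a_units, b_units

# Equal per-unit quality, 2:1 numbers.
a_left, b_left = lanchester(200.0, 100.0)
print(f"A survivors ~{a_left:.0f}, B survivors ~{max(b_left, 0.0):.0f}")
```

With equal rates the continuous model conserves A^2 - B^2, so the larger side finishes with about sqrt(200^2 - 100^2) ~ 173 survivors, not 100: letting the "goo" grow its numbers in a few systems is disproportionately costly to fight later, which is the coordination point being made.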
In this case it might be enough to coordinate defensive buildup within a broad ring around the goo, but it would still require coordination - especially if there were the usual kind of public goods problems in doing it. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Mon Jan 3 19:43:04 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 3 Jan 2011 20:43:04 +0100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D221FD8.3000103@satx.rr.com> References: <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> <62FDA911-88C6-4CF7-98C7-FED793891F4B@bellsouth.net> <4D221FD8.3000103@satx.rr.com> Message-ID: <20110103194304.GD16518@leitl.org> On Mon, Jan 03, 2011 at 01:13:28PM -0600, Damien Broderick wrote: >> So you're OK with obtaining all the information needed to make another >> identical living brain > > Read what I wrote. A vitrified brain is not a living brain. A vitrified brain is a snapshot, potentially enough to resume the original process (you'll be dropping a few bits on the floor, as short-term memory will not be consolidated, so you'll lose at least a couple hours). 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Mon Jan 3 20:09:45 2011 From: spike66 at att.net (spike) Date: Mon, 3 Jan 2011 12:09:45 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: <00a101cbab76$cea19e20$6be4da60$@att.net> References: <00a101cbab76$cea19e20$6be4da60$@att.net> Message-ID: <00d301cbab82$2ae26790$80a736b0$@att.net> ... On Behalf Of spike Subject: Re: [ExI] Spacecraft (was MM) ... On Behalf Of Keith Henson... >>Talk to Spike about the control problem...Keith >There are ways to do stuff like this using fast steering secondary mirrors, adaptable aperture, lotsa cool notions for laser propulsion that were actually developed from a weapons program, Airborne Laser: http://en.wikipedia.org/wiki/Boeing_YAL-1 ... most of the pointing control and accuracy infrastructure was subcontracted to Lockheed Missiles ... We would need to get tricky with our optical feedback loops, but I think this is doable with current control law. spike Ooops I realized the way I worded this earlier may be misleading. The ABL pointing accuracy and stability is not arc-second class, or even close to that. You really can't get all the way down to arcseconds at all on a moving platform or when one is firing thru air. Eewww, messy stuff is this "air." I like breathing as much as the next guy, but the atmosphere is nothing but problems for the optical controls guys. An unpredictable, dirty, shifty, diffracty bastard is this "atmosphere." If we could assume it away, our task would be so much simpler and cleaner. In any case, we could still do second stage ablative boost firing thru this "atmosphere" methinks. 
spike From lubkin at unreasonable.com Mon Jan 3 20:39:20 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Mon, 03 Jan 2011 15:39:20 -0500 Subject: [ExI] Asimov's 90th today In-Reply-To: <4D221F6C.80508@satx.rr.com> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> <4D221F6C.80508@satx.rr.com> Message-ID: <201101032038.p03Kcv72027312@andromeda.ziaspace.com> Asimov *did* say that Fred Pohl and Marvin Minsky were the only two people he acknowledged as being smarter than himself. But I would not be at all surprised if he had told more than one version. I'm not going to hunt for a proper citation (I *so* wish I could grep through my book collection) but my correspondence file is at hand, and is consistent with what I wrote -- > 25 October 1977 > > Dear David, > > In re Fred: > > I have known Fred since 1938 and in all those nearly four decades, > I have never ceased to marvel at his intelligence and wit. > > He knows more about more things than anyone else I know, and he > is cleverer at writing about them all than anyone else I know. > > He is also a sweet guy, who helped me in my career more than > anyone but John Campbell, himself. I explained about that in THE > EARLY ASIMOV and I am doing it again in my autobiography, > now in preparation. > > Isaac -- David. From hkeithhenson at gmail.com Mon Jan 3 23:47:29 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 3 Jan 2011 16:47:29 -0700 Subject: [ExI] Spacecraft (was MM) Message-ID: On Mon, Jan 3, 2011 at 12:32 PM, wrote: > From: "spike" >>>>> That amounts to about 0.002 MOA tracking a rocket through atmosphere. >>> >>>> MOA? >>> >>> Minute of arc. > >>Ah. The Hubble, which has been up for 20 years and is based on technology > at least 10 years before that, has a pointing accuracy of 7 milliarcseconds. 
> A milliarcsecond is about 5 x 10^-9 radians, so 7 would be about 35 x 10^-9 > rad. At the end of a 36,000,000 m radius, the error would be ~1.3 m > > Ja, there are two different things being discussed here, actually three: > tracking objects thru atmosphere, tracking a moving object and Hubble > boresight accuracy. The Hubble is indeed an impressive control system, but > it cannot track moving objects very competently. It is really really good > at doing what it was designed to do, fix on a dim object and stay right on > it for long periods. But it isn't nimble, doesn't need to be for that > application. One can steer a battleship with a canoe paddle, if one is > patient and has a few days to get it done. > > For anything that moves or depends on an optical feedback, an arc second of > accuracy is probably still out of our reach, but there is plenty of useful > stuff we could do with arc-second class ground based tracking. > > ... > >>> Slow tracking gives no room for any perturbations in flight path, right? > >>Talk to Spike about the control problem. But the acceleration is modest, > around a g, and while the velocity is high, the angular velocity tracking > the vehicle is low and you get feedback in less than 1/10th of a > second...Keith > > There are ways to do stuff like this using fast steering secondary mirrors, > adaptable aperture, lotsa cool notions for laser propulsion that were > actually developed from a weapons program, Airborne Laser: > > http://en.wikipedia.org/wiki/Boeing_YAL-1 > > Booeing was the prime contractor for this, but most of the pointing control > and accuracy infrastructure was subcontracted to Lockheed Missiles and Space > Company, in Sunnyvale Taxifornia, and also the Lockheed Advanced > Technologies Center in Palo Alto, using Lockheed Martin control technology > and Lockheed Martin control engineers, who incidentally worked for Lockheed > Martin. 
Of course we used a Booeing product to carry our control system > aloft, so those lads deserve credit where credit is due, but the control > system was pure LMCo, and it is WICKED cool, do let me assure you, clever as > all hell. > > Looks to me like we could adapt the accuracy infrastructure of the ABL to > fly at about 10 to 12 km altitude and provide second stage ablative boost > assist, from about 20 km to about 80-ish km altitude. So first stage mostly > solid propulsion, second stage ABL ablative boost, third stage throwaway > H2/LOX? We would need to get tricky with our optical feedback loops, but I > think this is doable with current control law. > > spike > > > > From: Alfio Puglisi >> Ah. The Hubble, which has been up for 20 years and is based on >> technology at least 10 years before that, has a pointing accuracy of 7 >> milliarcseconds. A milliarcsecond is about 5 x 10^-9 radians, so 7 >> would be about 35 x 10^-9 rad. At the end of a 36,000,000 m radius, >> the error would be ~1.3 m >> > > You are finally touching points where I can contribute something :-) > > Typical pointing accuracy for ground-based telescopes is on the order of 1 > arcsecond, sometimes worse. The Hubble doesn't do much better (some > references I saw speak of 0.2-0.5 arcsec, which I would regard as very > good). The number you quote (7 milliarcsec) is for pointing *stability*, > with feedback from a guide star. 
> > Therefore a laser-propulsion device cannot work without an AO-enabled launch > system, which can keep your laser beam focused on your target and will track > its motion. This supposes that the payload can give optical feedback about > its position, but I imagine that the ablation process will make it quite > bright :-) Jordin Kare convinced me that CW lasers rather than short-pulse ablation lasers are *much* nearer term on a GW scale. They are also much more efficient, turning upwards of 90% of the beam energy into kinetic energy in the exhaust vs 30% for ablation. And putting the whole Skylon-derived stage into LEO solves the sticky problem of how you get it back to the launch point. If it is too much trouble for the beam to hit the vehicle, then we know the ideal location and velocity profile we need to put the vehicle in orbit. Maybe we just sweep the beam without feedback and let the vehicle keep up with it. Given how fast hydrogen is moving through the vehicle's plumbing, it should be fairly easy to acquire the beam and keep it in the right spot. > Such AO systems are now commonplace on big telescopes, but work in the > milliwatt regime instead of GW, since they just reflect starlight :-) > Laser-guided AO systems, where a laser is shot up in the sky to create a > "fake" guide star, are starting to work right now, and if I remember > correctly some of them are planning an adaptive launch mirror. Such lasers > are on the order of 10 watts. I don't have the foggiest idea of what happens > when the power is scaled by a factor of 10^9. Having seen the safety systems > for such lasers, I would imagine that a multi-GW laser will be treated like > a nuclear-testing ground. At least, I would keep the same distance. The lasers would be at most multi MW. It has always been assumed that they would use adaptive optics. A ten-watt beam focused to a square millimeter is 10 MW per square meter. The power going up to GEO would be way below that. 
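[The beam-intensity comparison above reduces to power over spot area. A Python sketch; the one-square-meter spot for the propulsion beam is an illustrative assumption, not a figure from the thread's optics design.]

```python
# Intensity = power / spot area. Compare a 10 W guide-star laser
# focused to a millimeter-square spot against a multi-MW propulsion
# beam spread over an assumed meter-scale spot.

def intensity(power_w, spot_area_m2):
    """Beam intensity in W/m^2."""
    return power_w / spot_area_m2

guide_star = intensity(10.0, 1e-3 * 1e-3)  # 10 W over 1 mm^2
propulsion = intensity(5e6, 1.0)           # assumed: 5 MW over ~1 m^2

print(f"guide star: {guide_star:.0e} W/m^2, propulsion: {propulsion:.0e} W/m^2")
```

The small beam works out to 10 MW per square meter, matching the email's figure, while a 5 MW beam over a meter-scale spot stays below it, which is the "way below that" comparison being made.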
Keith From msd001 at gmail.com Tue Jan 4 00:25:20 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 3 Jan 2011 19:25:20 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: References: <41313.58400.qm@web114418.mail.gq1.yahoo.com> <4D1E1F5A.6020903@satx.rr.com> <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <20110101182448.GI16518@leitl.org> Message-ID: 2011/1/3 John Clark : > On Jan 1, 2011, at 3:19 PM, Mike Dougherty wrote: > > If I make a Perfect Copy(tm) then throw it down a black hole, are we still > synch'd? > > They'd be about as well synched as clocks would be synched if you blew up > one with a stick of dynamite but not the other. I don't see the point you > were trying to make and I sure don't see why you brought up something as > exotic as Black Holes into the discussion. I was content to leave my comment concluded with a divide-by-zero error, but since you asked... The exotic black hole was introduced for the sake of interrupting the machinery that verifies the copies are synchronized. Time dilation from near-lightspeed would be exotic too. The whole conversation is pretty much arbitrary anyway, right? By the time camp A constrains the experiment enough to provide any conclusive point, camp B cries foul for having violated a basic principle of their side of the argument. I guess it doesn't matter what the residents of either 10kg brick of computronium are thinking, all we observe is two 10kg bricks of computronium. Without a scale we could be fooled by a single 10kg brick and a cleverly positioned mirror, no? From sjatkins at mac.com Tue Jan 4 01:08:34 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 03 Jan 2011 17:08:34 -0800 Subject: [ExI] Von Neumann probes for what? 
In-Reply-To: References: <2EA10209-25C3-4985-A55C-D31A41B78BA7@mac.com> <20110101170536.GD16518@leitl.org> <51E1B34A-5562-484F-BA6C-B74479873D34@mac.com> Message-ID: <08BBBA6A-1079-46CF-82ED-ACEAEDB6413B@mac.com> On Jan 3, 2011, at 7:11 AM, Stefano Vaj wrote: > On 1 January 2011 19:03, Samantha Atkins wrote: >> Not so or I don't see why it would be so and the probe have any real capacity to do any good for the originating civ. A mere cosmic yeast mold is not terribly useful to anyone unless you like cosmic yeast. > > Unless you *are* the cosmic yeast. Well, my point was that if the probes are that dumb (*cosmic yeast*) that they do nothing but create more of themselves and spread then a) there would be little or no point and b) any advanced species encountering them would likely consider us boorish noobs at best and perhaps as a would-be cancerous blight requiring eradication. > The only problematic aspect is the > jump from a biological species to a Neumann probe structure. Then > Darwinian mechanisms kick in, the program being the way the > replicators get replicated, not the replicators the way to diffuse the > program... > What for? The variation and selection process can be very much part of the program so that the probes do not drift from what they were meant to achieve too much. This is unnatural selection. 
- s From sjatkins at mac.com Tue Jan 4 01:28:25 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 03 Jan 2011 17:28:25 -0800 Subject: [ExI] Singletons In-Reply-To: <4D22079E.2050201@aleph.se> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> <4D22079E.2050201@aleph.se> Message-ID: <68344036-5EB7-4EAA-A287-31A65C4B71C2@mac.com> On Jan 3, 2011, at 9:30 AM, Anders Sandberg wrote: > Eugen Leitl wrote: >> The problem is that the local rules cannot be static, if the underlying >> substrate isn't. And if there's life, it's not static. Unless the >> cop keeps beating you into submission every time you deviate from >> the rules. > > Which could be acceptable if the rules are acceptable. Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay. The system monitors all activity, and stomps on attempts at making the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object to. > > Now, is this really unacceptable and/or untenable? It is unacceptable to have any body enforce a ban on examining the possibility when said body has no idea whatsoever that there is any particular danger. Such regulating bodies, on the other hand, are a clear and very present danger to any real progress forward. > > > The rigidity of rules the singleton enforces can be all over the place from deterministic stimulus-responses to the singleton being some kind of AI or collective mind. The legitimacy can similarly be all over the place, from a basement accidental hard takeoff to democratic one-time decisions to something that is autonomous but designed to take public opinion into account. 
There is a big space of possible singleton designs. > No singleton can have information feeds effective enough, and localized enough, to enable it to outperform any/all more localized decision-making systems. A singleton is by design a single point of failure. > >> Let's look at a population of cultures the size of a galaxy. How do >> you produce an existential risk within a single system that can wipe more than a stellar system? In order to produce larger scale mayhem >> you need to utilize the resources of a large number of stellar >> systems concertedly, which requires large scale cooperation of >> pangalactic EvilDoers(tm). >> > > If existential risks are limited to local systems, then at most there is a need for a local singleton (and maybe none, if you like living free and dangerously). Actually manufacturing a supernova affecting everything in many hundreds of light years is not likely that difficult. But that is hardly a reason to go wild making super-super cops ruling over countless civilizations. > > However, there might be threats that require wider coordination or at least preparation. Imagine interstellar "grey goo" (replicators that weaponize solar systems and try to use existing resources to spread), and a situation of warfare where the square of number of units gives the effective strength (as per the Lanchester law; whether this is actually true in real life will depend on a lot of things). It is exceedingly unlikely, although totally dumb, replication-crazed Von Neumann probes may come close. > In that case allowing the problem to grow in a few systems far enough would allow it to become overwhelming. In this case it might be enough to coordinate defensive buildup within a broad ring around the goo, but it would still require coordination - especially if there were the usual kind of public goods problems in doing it. Public good problem? Either there is a danger to the hopefully much more rational minds involved or there is not. 
They will act rationally to deal with it to the degree it really is that much of a danger. - s From bbenzai at yahoo.com Tue Jan 4 01:32:15 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 3 Jan 2011 17:32:15 -0800 (PST) Subject: [ExI] simulation as an improvement over reality In-Reply-To: Message-ID: <88714.4120.qm@web114401.mail.gq1.yahoo.com> Damien Broderick wrote: > On 12/31/2010 11:21 AM, Ben Zaiboc wrote: >> a copy of you*is a you*, exactly as a copy of Beethoven's 5th is Beethoven's 5th. The copy will be experiencing being you. How could it possibly be otherwise? > Dear dog in Himmel! NOBODY HAS EVER DENIED THIS! An exact copy of you MUST experience himself as you. That's not the problem. The real issue nobody ever seems to answer was posed by Stuart: > "you should be alright with your long lost twin brother showing up, locking you in the cellar, and assuming your identity. Or cheating death by brainwashing someone else into honestly believing they are you." > You'd be okay with that, Ben? You'd be mollified by the report that the Benified twin or brainwashee was a really, really good copy of you? You'd hand over your savings, house, spouse, children to this as-perfect-as-possible substitute, and sit quietly in the cellar knowing that "you" were having a really great time? > I don't think so. And you'd be quite right. Your argument is the same as the old one about a person who, after being copied, should be quite happy to shoot himself. Naturally that is silly. Nobody would be happy to shoot themselves, regardless of how many identical copies of them were in existence. The 'real issue' you pose above is not an issue at all. The issue being discussed is whether a person survives uploading or not. By the very definition of uploading, they must. A copied person would diverge from the 'original' in the very first instant they exist. 
There would now be two individual people, each with a common 'mind-ancestor', each with just as much claim to being the 'original', if that means anything, as the other. > It's a non-Abelian proposition. It's intransitive. Yes, the copy experiences self and world exactly as you do and is therefore *a* you. No, *you* here and now have no stake (other than empathy or envious hatred) in that replica consciousness, certainly not to the extent that you'd feel happy to be killed or locked in the cellar in order for that other Ben to remain alive and free. All I can say is this: ---------<=========== The single dashed line represents the original person. The Less-Than represents the point at which the copying occurs. The two dashed lines represent the two resulting copies. (The diagram is not meant to imply that the 'copies' are exactly in synchrony. They *will* diverge, even if only as a consequence of occupying different positions in space) Tell me, which of them is the 'original'? Can you see that question makes no sense? That the original only exists before the point of copying? After the copying, there are two descendants of this past original, in the same sense that right now there is a single descendant of *your* original of two minutes ago. To say that the copying process preserves the atoms of the historical original in one case, and not in the other, and that this makes all the difference, is to assert that atoms are embodying something essential about the self, something /that cannot be transferred to other atoms/. If this is the case, then it means we are all doomed. About every 7 days*. Doooooomed! Now I'm quite prepared to entertain, as an entertaining hypothesis, that I die every 7 days and am reborn as a completely different person, but I have to say that if this is the case, I really don't mind. 
A perfected uploading process should be no different to the periodic replacement of various essential molecules in the brain: Change the matter, preserve the pattern. That pattern (dynamic and enormously complex, but still a pattern) is THE SELF. Wherever your pattern is, there you are. It doesn't matter what types of atoms it is instantiated in, it doesn't matter how many instances of it there are. This "make a copy, then kill the original, he won't mind" concept is a red herring. Or rather, a straw man. It's definitely not 'the real issue'. Ben Zaiboc * Yes, I know that's a gross simplification. But it illustrates the principle. From bbenzai at yahoo.com Tue Jan 4 02:00:56 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 3 Jan 2011 18:00:56 -0800 (PST) Subject: [ExI] simulation as an improvement over reality In-Reply-To: Message-ID: <625101.864.qm@web114403.mail.gq1.yahoo.com> "spike" wrote: -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Ben Zaiboc >>...I'm about to go out, meet some friends and kill a few neurons with alcohol... > You have plenty to spare Ben. Have fun, drive carefully or get a designated driver. {8-] Well, I'm told that I had fun. Not so sure about the rest, although I do have a vague impression of the inside of a taxi. >>... In the morning I'll be waking up in a different bed... > Oh? Whose? That, sir, is irrelevant to the thrust of my argument. >>... will be in a different mood... > Depending of course on whose bed you are in when you awake... No, it was more a function of amount and type of ethanol + congeners. >> wearing different clothes... > Whose? And what is that person wearing? I can imagine your mood will depend on the answer to these questions. Almost certainly mine. I think. >>... have some different memories, even... > Ja, that whole party sounds like one which could produce "different memories." 
The stories from your friends may differ widely from yours later. Of course they may perform drunken antics as well. Indeed. My camera seems to have different memories to my brain. I remember having several quiet and witty conversations with a group of convivial socialites, followed by an elegant tea-dance. I have no idea who the tinsel-covered stamping hooting gorillas in the pictures are. >>... I'll still think of myself as the same person as last night, and be happy about it... > We will be more than happy, we will be overcome with mirth. Well, I have to admit defeat on that one. I'm a changed man since that night. My liver is still recovering, anyway. >>... I expect that uploading, once it's perfected, won't be that much different...Ben Zaiboc > Ben let's hope they get the fun subroutines working right. {8^] OhMan to that! > Happy New Year extropians! Happy New Year everyone!, even the miserable grumpy ones. Ben Zaiboc From agrimes at speakeasy.net Tue Jan 4 06:27:25 2011 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 04 Jan 2011 01:27:25 -0500 Subject: [ExI] FAQ over at Orion's Arm. Message-ID: <4D22BDCD.2010205@speakeasy.net> I was reading the FAQ over at Orion's Arm and this quote just jumped out at me and bit me in the brain. =( """"""""""""""""""""""""""""""" We are storytellers and for the sake of a good story we have assumed a different future to the optimistic singularitarian scenario. """"""""""""""""""""""""""""""" Up until that line, I had thought Orion's Arm was by far the most optimistic vision of the future I'd read in years. =~( -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. 
From anders at aleph.se Tue Jan 4 10:22:23 2011 From: anders at aleph.se (Anders Sandberg) Date: Tue, 04 Jan 2011 11:22:23 +0100 Subject: [ExI] Singletons In-Reply-To: <68344036-5EB7-4EAA-A287-31A65C4B71C2@mac.com> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> <4D22079E.2050201@aleph.se> <68344036-5EB7-4EAA-A287-31A65C4B71C2@mac.com> Message-ID: <4D22F4DF.3080107@aleph.se> On 2011-01-04 02:28, Samantha Atkins wrote: >> Which could be acceptable if the rules are acceptable. Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay. The system monitors all activity, and stomps on attempts at making the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object to. >> >> Now, is this really unacceptable and/or untenable? > It is unacceptable to have any body enforcing not examining the possibility when said body has no idea whatsoever there is any particular danger. Such regulating bodies on the other hand are a clear and very present danger to any real progress forward. I am literally a card-carrying libertarian (OK, the card is a joke card saying "No to death and taxes!"), so I am not fond of unnecessary regulation or coercion. But in order to protect freedoms there may be necessary and rationally desirable forms of coercion (the classic example is of course self-defense). If everybody had the potential to cause a global terminal disaster through some action X, would it really be unacceptable to institute some form of jointly agreed coercion that prevented people from doing X?
It seems that even from very minimalist libertarian principles this would be OK. We might have serious practical concerns about how to actually implement it, but ethically it would be the right thing to set up the safeguard (unless the safeguard managed to be a cure worse than the illness, of course). [ Also, there is the discussion about how to handle lone holdouts - I'm not sure I agree with Nozick's solution in ASU or even whether it is applicable to the singleton issue, but let's ignore this headache for the time being. ] So unless you think there is no level of existential threat that can justify coordinated coercion, there exists *some* (potentially very high) level of threat where it makes sense. And clearly there are other lower levels where it does *not*. Somewhere in between there is a critical level where the threat does justify the coercion. The fact that we (or the future coercive system) do not know everything doesn't change things much, it just makes this decision-making under uncertainty. That is not an insurmountable obstacle. Just plug in your favorite model of decision theory and see what it tells you to do. It might be true that a galactic civilization has no existential threats and no need for enforcing global coordination. It might even be true for smaller civilizations. But I think this is a claim that needs to be justified based on risk assessment in the real world, not just rejected a priori. > No singleton can have effective enough localized enough information feeds enabling it to outperform any/all more localized decision making systems. A singleton is by design a single point of failure. These are two good criticisms. The first works when talking about economics. However, it is not clear that a single/local agent will outperform the singleton. Who has the advantage likely depends on the technology involved and the relative power ratio: this is going to be different from case to case.
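The critical-level argument can be made concrete with a toy expected-utility comparison. Every probability, cost, and risk number below is invented purely for illustration; nothing here is an actual estimate:

```python
# Toy expected-utility model of the singleton decision.
# All numbers are placeholders invented for illustration only.

def expected_utility(p_threat, implement_singleton,
                     harm=1.0,             # losing everything we value
                     singleton_cost=0.05,  # ongoing cost of the safeguard
                     singleton_risk=0.01,  # chance the singleton itself goes bad
                     effectiveness=0.9):   # fraction of the threat it averts
    """Expected utility on a scale where 0 = nothing lost, -1 = everything lost."""
    if implement_singleton:
        residual = p_threat * (1 - effectiveness)
        return -(singleton_cost + singleton_risk * harm + residual * harm)
    return -(p_threat * harm)

def critical_threat_level(step=1e-4):
    """Smallest threat probability at which the coercion pays off."""
    p = 0.0
    while p <= 1.0:
        if expected_utility(p, True) > expected_utility(p, False):
            return p
        p += step
    return None
```

With these made-up numbers the safeguard only wins once the threat probability passes roughly 6.7%; below that, the singleton's own cost and risk dominate. The point is not the particular number but that such a crossover level exists at all, which is exactly the claim being made.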
A ban on nuke production is relatively easy to enforce; a ban on computer virus production isn't. The single point of failure is IMHO a much deeper problem. This is where I think singletons may be fatally flawed - our uncertainty in designing them correctly and the large consequences of mistakes *might* make them incoherent as xrisk-reduction means (if the risk from the singleton is too large, then it should not be used; however, this likely depends sensitively on your decision theory). To really say something about the permissibility and desirability of singletons we need to have: 1. A risk spectrum - what xrisks exist, how harmful they are, how likely they are, how uncertain we are about them. 2. An estimate of the costs of implementing a singleton that can deal with the risk spectrum, and our uncertainty about the costs. This includes an estimate of the xrisks from the singleton. 3. A decision theory, telling us how to weigh up these factors. Our moral theories will come in by setting the scale of the harms and costs (some moral theories also make claims about the proper decision theory, it seems). My claim in this post is that most reasonable moral theories will allow nonzero-cost singletons for sufficiently nasty risk spectra, and that it can be rational to implement one. I do not know whether our best xrisk estimates make this look likely to be the case in the real world: we likely need to wait for Nick to finish his xrisk book before properly digging into it. -- Anders Sandberg Future of Humanity Institute Oxford University From bbenzai at yahoo.com Tue Jan 4 12:49:07 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 4 Jan 2011 04:49:07 -0800 (PST) Subject: [ExI] simulation as an improvement over reality. In-Reply-To: Message-ID: <1976.52184.qm@web114415.mail.gq1.yahoo.com> Eugen Leitl commented: > This is easy, why have so many people have such > troubles to get it? I think I know why.
I think it's because dualistic thinking has such a stranglehold on our minds. It's deeply ingrained, as can be seen by the prevalence of superstition, especially religions, in all human populations. Many people will vehemently deny that they are thinking dualistically, then immediately go back to discussing the mysterious animus that can't be captured, described, transferred by ordinary physical processes, or even coherently thought about, in most cases. Someone coined the term "Crypto-Dualists" for such people, a while ago, and I think it's very appropriate. This mysterious animus goes by many names (as long as it's not "soul"), but it's evidently not possible for a person to inherit it from their self-of-a-while-ago, unless they are made of the same stuff as that previous self. And it can't be duplicated. And it is the sole thing which makes them them. (Heh. "Sole thing"). When told that it simply doesn't exist, that there's no need for it and that it goes against the observed facts and logic, they get upset, start talking about 'zombies' or start jumping up and down shouting "but it wouldn't be YOU!", and construct elaborate schemes to prove it (rather reminds me of Ptolemaic astronomy, or the contortions of theologians when confronted with common-sense). This despite the fact that they acknowledge that the 'zombies' or whatever they want to call them, would behave exactly the same as the 'original' would under the same circumstances, and would have exactly the same memories, and would in fact be the same person, in every detail (except for the little fact that they would not actually be the same person). I agree with the people who say that natural selection will decisively end the argument, and the supposed not-you zombies will be the ones who carry the torch of intelligence into the future. These zombies-that-are-not-you will be blissfully unaware of their woeful lack of.. 
world-lines or atomic continuity or whatever, and it won't make the slightest bit of difference to anyone or anything. I must admit that I had a bit of this crypto-dualism myself, until I read Linda Nagata's 'Vast'. I finally realised, after thinking about it for a while, that the pilot of the starship Null Boundary wasn't in fact committing suicide every 90 seconds at all, he was doing nothing more significant than what I do every morning when I wake up and can't remember the dream I was just having. Ben Zaiboc From spike66 at att.net Tue Jan 4 15:34:14 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 07:34:14 -0800 Subject: [ExI] Spacecraft (was MM) In-Reply-To: References: Message-ID: <004701cbac24$d824c680$886e5380$@att.net> On Behalf Of Keith Henson ... >...If it is too much trouble for the beam to hit the vehicle, then we know the ideal location and velocity profile we need to put the vehicle in orbit. Maybe we just sweep the beam without feedback and let the vehicle keep up with it. Given how fast hydrogen is moving through the vehicle's plumbing, it should be fairly easy to acquire the beam and keep it in the right spot...Keith That's a really interesting notion Keith. I have a hybrid notion of sorts: we do single axis control using all ground-based drive lasers, and the other axis of control is done by the spacecraft. The idea of sweeping in only one axis necessitates being on or very near the equator, so imagine one of those equatorial mountains as a base. We lift to 10k using solids or perhaps even air breathing recoverable propulsion, then use a sweeping single axis control (elevation only) on the ground firing due east, where the bird is responsible for moving in the north-south axis to stay in the beam. Another way to do this is to have a variable roll rate on the bird, then take advantage of asymmetric thrust of an ablative propulsion system to steer itself into the beam. I need to work on that idea some more. 
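Here is a crude numerical sketch of that scheme, just to show the shape of the control loop: the ground laser sweeps in elevation only, and the vehicle nulls its north-south offset from the beam centerline with a proportional-derivative law standing in for the asymmetric-thrust steering. Every gain, limit, and number below is a made-up placeholder, not a real design value:

```python
# Toy beam-riding sketch: the ground station sweeps elevation only,
# and the vehicle corrects north-south with lateral thrust. The PD
# gains and the 2 m/s^2 lateral authority are hypothetical values.

def track_beam(offset_m, velocity_ms, dt=0.1, steps=600,
               kp=0.4, kd=1.2, max_accel=2.0):
    """Drive the lateral offset from the beam centerline toward zero.

    Returns the offset history (metres) over steps * dt seconds.
    """
    history = [offset_m]
    for _ in range(steps):
        accel = -kp * offset_m - kd * velocity_ms       # PD control law
        accel = max(-max_accel, min(max_accel, accel))  # thrust limit
        velocity_ms += accel * dt                       # Euler integration
        offset_m += velocity_ms * dt
        history.append(offset_m)
    return history

# Vehicle starts 5 m north of the beam and settles back onto it
# well within the 60 second run.
hist = track_beam(offset_m=5.0, velocity_ms=0.0)
```

The gains here are chosen near critical damping, so the vehicle slides back onto the centerline without ringing; whether a real ablative-thrust steering loop could achieve anything like this bandwidth is exactly the open question.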
Thanks Keith, this is an interesting idea. spike From spike66 at att.net Tue Jan 4 16:08:29 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 08:08:29 -0800 Subject: [ExI] atheists declare religions as scams Message-ID: <005401cbac29$a062d7a0$e12886e0$@att.net> What surprises me is that they would include the one on the far right of the five shown in the graphic: http://www.christianpost.com/article/20110103/atheists-declare-religions-as- scams-in-new-ad/ Opposing the other four is harmless; taking on the one in the back row, far right can result in murder. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Jan 4 16:20:19 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 4 Jan 2011 11:20:19 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <20110103194304.GD16518@leitl.org> References: <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> <62FDA911-88C6-4CF7-98C7-FED793891F4B@bellsouth.net> <4D221FD8.3000103@satx.rr.com> <20110103194304.GD16518@leitl.org> Message-ID: <2266AC4B-ADEA-48A7-8851-9BEFCD1B4A9F@bellsouth.net> >> >> Damien Broderick wrote: >> Read what I wrote. A vitrified brain is not a living brain. > Eugen Leitl wrote: > A vitrified brain is a snapshot, potentially enough to > resume the original process (you'll be dropping a few bits on > the floor, as short-term memory will not be consolidated, > so you'll lose at least a couple hours). Yes, but the machine built by Kenneth J. Hayworth doesn't use vitrified brains, it uses fresh wet squishy brains, and makes consistent slices of it 29.4 nanometers thick that are ready to be photographed with an electron microscope. 
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Jan 4 17:27:29 2011 From: pharos at gmail.com (BillK) Date: Tue, 4 Jan 2011 17:27:29 +0000 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <2266AC4B-ADEA-48A7-8851-9BEFCD1B4A9F@bellsouth.net> References: <00B90463-FB4D-420C-BBC7-4677E4CB6EAF@bellsouth.net> <135888.23656.qm@web65615.mail.ac4.yahoo.com> <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> <62FDA911-88C6-4CF7-98C7-FED793891F4B@bellsouth.net> <4D221FD8.3000103@satx.rr.com> <20110103194304.GD16518@leitl.org> <2266AC4B-ADEA-48A7-8851-9BEFCD1B4A9F@bellsouth.net> Message-ID: 2011/1/4 John Clark wrote: > Yes, but the machine built by Kenneth J. Hayworth doesn't use vitrified > brains, it uses fresh wet squishy brains, and makes consistent slices of > it 29.4 nanometers thick that are ready to be photographed with an electron > microscope. > Fresh wet squishy brains! Yummy! But 29.4 nanometer slices are too thin for toasted sandwiches. BillK From atymes at gmail.com Tue Jan 4 17:25:05 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 4 Jan 2011 09:25:05 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: <005401cbac29$a062d7a0$e12886e0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> Message-ID: The atheists seem to be outweighing the theists in the comments. Also, depending on circumstances, openly going against any of the three in back could get you killed. (Granted, it's more famous for the one in the back right - though, the Middle East-North Africa region, where this is most common, only accounts for about 20% of worldwide Muslim population according to Wikipedia's sources.) I don't know if that's true of the two in front.
2011/1/4 spike : > > > What surprises me is that they would include the one on the far right of the > five shown in the graphic: > > > > http://www.christianpost.com/article/20110103/atheists-declare-religions-as-scams-in-new-ad/ > > > > Opposing the other four is harmless; taking on the one in the back row, far > right can result in murder. > > > > spike > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From thespike at satx.rr.com Tue Jan 4 18:21:32 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 04 Jan 2011 12:21:32 -0600 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <1976.52184.qm@web114415.mail.gq1.yahoo.com> References: <1976.52184.qm@web114415.mail.gq1.yahoo.com> Message-ID: <4D23652C.7030502@satx.rr.com> On 1/4/2011 6:49 AM, Ben Zaiboc wrote: > I agree with the people who say that natural selection will decisively end the argument, and the supposed not-you zombies will be the ones who carry the torch of intelligence into the future. Zombies? Zombies? What part of "NOBODY HAS EVER DENIED THIS! An exact copy of you MUST experience himself as you" didn't you understand? So you argue that brute reproduction is the only valid way to adjudicate the validity of a proposition or meme? Do remember that when the advocates of Mormonism, fundie Xianity and Islam outnumber those who carry the torch of your own favorite ideas. Oh wait, they already do, and the gap appears to be growing. Damien Broderick From spike66 at att.net Tue Jan 4 18:32:26 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 10:32:26 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> Message-ID: <007901cbac3d$bcf6e230$36e4a690$@att.net> ... 
>...Also, depending on circumstances, openly going against any of the three in back could get you killed. (Granted, it's more famous for the one in the back right - though, the Middle East-North Africa region, where this is most common, only accounts for about 20% of worldwide Muslim population according to Wikipedia's sources.) I don't know if that's true of the two in front... Ja, stuff like this happens all the time with the other two biggies: http://www.bbc.co.uk/news/world-south-asia-12111831 Not. These other five religions have a symbol, a cross, a six pointed star, crescent and star, and so forth. Agnostics and atheists should have a symbol too, so they can have something to carve on a gravestone, should they opt for one. For agnostics I propose a question mark. That was easy. Atheists are more difficult, but we could imagine the international no symbol (the circle with the diagonal) with nothing in it, so it's no nothing. Or perhaps a question mark inside the slash circle, meaning there's no question about it, there's no god. Or a circle with a partial line thru it with a gap in the diagonal, which means "Ask a hipster what that means," at which time you will learn it means there is something, just not a god or gods. Or just a simple capital A. I am told there are no atheists in foxholes, so perhaps we need a symbol meaning no foxholes, but I don't know how to draw that. Are there already symbols for agnostics and atheists?
spike From jonkc at bellsouth.net Tue Jan 4 18:34:48 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 4 Jan 2011 13:34:48 -0500 Subject: [ExI] Asimov's 90th today In-Reply-To: <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> Message-ID: <21E9F234-D921-4C53-BC1E-B47C47586E0C@bellsouth.net> On page 302 of Isaac Asimov's autobiography " In Joy Still Felt" he describes the time he met Carl Sagan for the first time: "He [Carl Sagan] was a 27 year old handsome young man, tall, dark, articulate and absolutely incredibly intelligent. I had to add him to Marvin Minsky and thereafter I would say that they were two people who I would readily admit were more intelligent than I was. " John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Jan 4 18:38:50 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 4 Jan 2011 13:38:50 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D23652C.7030502@satx.rr.com> References: <1976.52184.qm@web114415.mail.gq1.yahoo.com> <4D23652C.7030502@satx.rr.com> Message-ID: On Jan 4, 2011, at 1:21 PM, Damien Broderick wrote: > An exact copy of you MUST experience himself as you" didn't you understand? Yes I understand that, and though it is immodest of me to say so, I understand that point a good deal better than you do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Tue Jan 4 19:05:27 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 4 Jan 2011 11:05:27 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: <007901cbac3d$bcf6e230$36e4a690$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> Message-ID: On Tue, Jan 4, 2011 at 10:32 AM, spike wrote: > Agnostics and atheists should have a > symbol too, so they can have something to carve on a gravestone, should they > opt for one. How about the simple lack of a symbol? Religions catalyze around their own banners, their own standards. We, on the other hand, just are, much like the universe. From dan_ust at yahoo.com Tue Jan 4 18:40:08 2011 From: dan_ust at yahoo.com (Dan) Date: Tue, 4 Jan 2011 10:40:08 -0800 (PST) Subject: [ExI] atheists declare religions as scams In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> Message-ID: <422091.7256.qm@web30106.mail.mud.yahoo.com> The Abrahamic faiths tend to be less tolerant of opposing views overall. (If the Bible is any guide to history, wiping out people of opposing views happened and was condoned.) That said, though, toleration seems to be something that takes time to learn and institutionalize. I believe the EP crowd can offer up reasons for this: anything that's different has a good chance of being dangerous, so best to just not tolerate it as a default condition. But all of this said, we run into the problem of scriptural or even historical determinism. Just because some Muslims today don't tolerate ideological opposition to their beliefs -- and by "not tolerate," I mean use violence, not merely decline friendship -- doesn't mean all Muslims do. Regards, Dan From: Adrian Tymes To: ExI chat list Sent: Tue, January 4, 2011 12:25:05 PM Subject: Re: [ExI] atheists declare religions as scams The atheists seem to be outweighing the theists in the comments.
Also, depending on circumstances, openly going against any of the three in back could get you killed. (Granted, it's more famous for the one in the back right - though, the Middle East-North Africa region, where this is most common, only accounts for about 20% of worldwide Muslim population according to Wikipedia's sources.) I don't know if that's true of the two in front. 2011/1/4 spike : > What surprises me is that they would include the one on the far right of the > five shown in the graphic: > > http://www.christianpost.com/article/20110103/atheists-declare-religions-as-scams-in-new-ad/ > > Opposing the other four is harmless; taking on the one in the back row, far > right can result in murder. > > spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Jan 4 19:14:40 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 04 Jan 2011 13:14:40 -0600 Subject: [ExI] Asimov's 90th today In-Reply-To: <21E9F234-D921-4C53-BC1E-B47C47586E0C@bellsouth.net> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> <21E9F234-D921-4C53-BC1E-B47C47586E0C@bellsouth.net> Message-ID: <4D2371A0.7000101@satx.rr.com> On 1/4/2011 12:34 PM, John Clark wrote: > On page 302 of Isaac Asimov's autobiography "In Joy Still Felt" he > describes the time he met Carl Sagan for the first time: > > "He [Carl Sagan] was a 27 year old handsome young man, tall, dark, > articulate and absolutely incredibly intelligent. I had to add him to > Marvin Minsky and thereafter I would say that they were two people who I > would readily admit were more intelligent than I was." I suspect it's in chapter 144 of *I. Asimov: A Memoir*, on Heinz Pagels, that he added Dr. P to that list. But I don't have the book handy, and Google won't display that text. Fwiw.
Damien Broderick From sparge at gmail.com Tue Jan 4 18:52:38 2011 From: sparge at gmail.com (Dave Sill) Date: Tue, 4 Jan 2011 13:52:38 -0500 Subject: [ExI] atheists declare religions as scams In-Reply-To: <007901cbac3d$bcf6e230$36e4a690$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> Message-ID: On Tue, Jan 4, 2011 at 1:32 PM, spike wrote: > > > Are there already symbols for agnostics and atheists? Don't know, but for songs atheists have: http://www.youtube.com/watch?v=ADNesm6F27U -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Jan 4 19:16:02 2011 From: sparge at gmail.com (Dave Sill) Date: Tue, 4 Jan 2011 14:16:02 -0500 Subject: [ExI] atheists declare religions as scams In-Reply-To: <007901cbac3d$bcf6e230$36e4a690$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> Message-ID: On Tue, Jan 4, 2011 at 1:32 PM, spike wrote: > Are there already symbols for agnostics and atheists? http://www.religioustolerance.org/atheist6.htm The Darwin fish is the only one I've seen. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Jan 4 19:54:11 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 11:54:11 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> Message-ID: <001b01cbac49$28777aa0$79666fe0$@att.net> ... On Behalf Of Adrian Tymes Subject: Re: [ExI] atheists declare religions as scams On Tue, Jan 4, 2011 at 10:32 AM, spike wrote: >>?Agnostics and atheists should have a symbol too, so they can have something to carve on a gravestone, >>should they opt for one. >How about the simple lack of a symbol? Religions catalyze around their own banners, their own standards. 
We, on the other hand, just are, much like the universe. Ja, that mostly works for me. Of course then anyone who has no religious symbol gets quietly coopted by those whose belief is symbolized by nothing, the atheists. This may not be completely acceptable to the believers who choose no symbol. In this sense it would be analogous to those who orally stimulate their partner's testicles being involuntarily coopted by the much larger group of those who oppose big government spending, all as a result of their sharing the endearing term "teabaggers." The former group had the name first, but the latter group had it most. Regarding lack of symbol, I think you have it right however. spike From nymphomation at gmail.com Tue Jan 4 19:07:41 2011 From: nymphomation at gmail.com (*Nym*) Date: Tue, 4 Jan 2011 19:07:41 +0000 Subject: [ExI] atheists declare religions as scams In-Reply-To: <007901cbac3d$bcf6e230$36e4a690$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> Message-ID: On 4 January 2011 18:32, spike wrote: > > These other five religions have a symbol, a cross, a six pointed star, > crescent and star, and so forth. Agnostics and atheists should have a > symbol too, so they can have something to carve on a gravestone, should they > opt for one. For agnostics I propose a question mark. That was easy. > > Atheists are more difficult, but we could imagine the international no > symbol (the circle with the diagonal) with nothing in it, so it's no > nothing. Or perhaps a question mark inside the slash circle, meaning > there's no question about it, there's no god. Or a circle with a partial > line thru it with a gap in the diagonal, which means "Ask a hipster what > that means," at which time you will learn it means there is something, just > not a god or gods.
I am told there are no > atheists in foxholes, so perhaps we need a symbol meaning no foxholes, but I > don't know how to draw that. > > Are there already symbols for agnostics and atheists? The only widely known one is the Atheists of America one, which is a bit complex as religious symbols go. Saw a few scarlet As when we protested the pope* in London last year with Dawkins and co. See: http://www.religioustolerance.org/atheist6.htm * http://www.facebook.com/photo.php?pid=7081314&l=44a8af41ef&id=582689147 * http://www.facebook.com/photo.php?pid=7081316&l=0595a2bb1a&id=582689147 HTH Heavy splashings, Thee Nymphomation 'If you cannot afford an executioner, a duty executioner will be appointed to you free of charge by the court' From spike66 at att.net Tue Jan 4 20:07:43 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 12:07:43 -0800 Subject: [ExI] Asimov's 90th today In-Reply-To: <4D2371A0.7000101@satx.rr.com> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> <21E9F234-D921-4C53-BC1E-B47C47586E0C@bellsouth.net> <4D2371A0.7000101@satx.rr.com> Message-ID: <001c01cbac4b$0cca2490$265e6db0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick ... >> Marvin Minsky and thereafter I would say that they were two people who I would readily admit were more intelligent than I was. " >I suspect it's in chapter 144 of *I. Asimov: A Memoir*, on Heinz Pagels, that he added Dr. P to that list. But I don't have the book handy...Damien Broderick It isn't stated in those exact terms, but close. On page 470 of I.Asimov (ch. 144) Asimov comments: "Heinz Pagels was, in my opinion, the brightest of the shining lights who assembled at the Hugh Downs dinners.
He also ran the Reality Club, a group of brilliant minds who gathered at roughly monthly intervals at various places in Manhattan to listen to talks on the borderlands of scholarship and to discuss what they heard. I was invited to join, but I have not attended regularly... Alan Guth gave a fascinating talk... I heard of the inflationary universe theory from Heinz..." When I read the passage just now in I.Asimov, I am struck by how cool it would have been to attend those meetings. So sad they were not recorded. Those videos would be priceless. spike From avantguardian2020 at yahoo.com Tue Jan 4 20:27:03 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Tue, 4 Jan 2011 12:27:03 -0800 (PST) Subject: [ExI] simulation as an improvement over reality. Message-ID: <626599.70341.qm@web65611.mail.ac4.yahoo.com> > >From: John Clark >To: ExI chat list >Sent: Sun, January 2, 2011 10:51:14 AM >Subject: Re: [ExI] simulation as an improvement over reality. > > > >On Jan 2, 2011, at 4:38 AM, The Avantguardian wrote: > >>I am not saying that there is something "missing" from the copy. I am saying >>that both the original and the copies will have unique reference frames. ----------John wrote-------------------------------------------------------- In my thought experiment the two were not moving with respect to each other so I see absolutely nothing unique about their reference frames, and even if they were I'll be damned if I can see why it would matter. And anyway I thought you said the copies were perfect. -------------------------------------- > These reference frames will be physical in the sense that they will sweep > out distinct world lines in space-time ----------John wrote-------------------------------------------------------- Space-time lines of what? Space-time lines of every atom that was once part of your body including that atom you pissed down the toilet when you were in the third grade?
------------------------------------------- Yes, that atom's world line orbited a mass of similar lines for some time before being pissed away. That twisted mass of world lines was and is me. Atoms come, exchange partners, and go. Some do it quickly, some slowly, but still there is a relatively stable pattern of atomic world lines clustered around my center of mass. Of course for simplicity, you can approximate me with a single fatter world line representing the average position of my atoms. > >Call it the autocentric sense, if you will ----------John wrote-------------------------------------------------------- Yet another euphemism for the soul. And please explain why this "autocentric sense" cannot be copied in a perfect copy. ----------------------------------------------------------------------------------- But nobody who actually believes in souls would think that I am describing anything remotely like a soul. And if the autocentric sense is a soul then all GPS devices and other navigational instruments would have souls. Furthermore the sense *can* be copied, but once it is copied it would become non-identical. For some items, perfect copies can't exist. To see why, imagine you have a perfect replicator that can replicate anything flawlessly and a perfect GPS unit that can measure its own position with respect to the GPS satellite constellation with indefinitely high precision. Now imagine using the replicator on the GPS unit so that now you have two GPS units. Do the GPS units read *exactly* the same position? If not, the GPS devices are not perfect copies, since their readings are different. If they do, then one of the GPS devices is not functioning correctly because both can't be in the same place at the same time. > The label "you" implies "over there". Me implies "here".
> ----------John wrote-------------------------------------------------------- But as I have said before and will continue saying, if the two are identical and you exchange "here" for "over there" even the very universe itself will not notice any difference, and remember that both you standing here and that fellow over there are also part of the universe and you'd be no better at detecting that exchange than any other part of the universe. And as I have also said before this is not just some skittering abstraction but the bedrock behind one of the most important ideas in modern physics, exchange forces. ------------------------------ Exchange forces play a role in my argument too because they mediate the Pauli Exclusion Principle that prevents fermions with identical quantum states from occupying the same position in space. Because of this no two pieces of matter can occupy the exact same place at the exact same time, even if in all other respects they are identical. >You don't feel like a different person by moving from one spatial coordinate to >another because the reference frame moves with you ----------John wrote-------------------------------------------------------- So if I give you general anesthesia, put you on a jet to an undisclosed location and then wake you up, Stuart LaForge will be dead and there will just be an impostor who looks, behaves, thinks and believes with every fibre of his being that he is Stuart LaForge -------------------------------------------------------------------- No because the autocentric sense is about *relative* positioning. It recalibrates wherever I happen to find myself after the anesthesia wears off back to being ground zero, the origin of my spatial map. >The autocentric sense does not track your absolute position in space, there is >no such thing, but your position relative to external objects including any >copies of you that may be around.
> And regardless of your autocentric sense, you have a physical position and > associated reference frame relative to the fixed stars. ----------John wrote-------------------------------------------------------- Without your senses there is no way to even know where your brain is, so I sure don't see how it could have anything to do with consciousness or identity. For most of human history people thought the brain was an unimportant organ that had something to do with cooling the blood and the heart was the seat of consciousness; even though those ancient people literally didn't know where they were I still think they were conscious. -------------------------------------------------------------- But that is no accident, John. You wouldn't have a brain at all if it weren't for your senses. In the study of natural history, there is a distinct process called cephalization that is observed across phyla of increasing complexity. Animals that don't move much like sponges and anemones don't need much in the way of senses and consequently don't need brains. As animals started moving, they developed senses like sight, smell, and hearing. The sense organs were concentrated on the leading portion of the body in the accustomed direction of movement, because organisms needed to distinguish whether they were moving toward predators or other hazards. To avoid signal propagation delays in processing sensory information from these sense organs, ganglia of nerve cells clustered immediately behind these sensory organs. These ganglia became the brain and the whole ensemble became the head. But this feeds into my larger point, which is that the autocentric sense is not the soul or some metaphysical bullshit but an evolved brain function that allows you to distinguish yourself from rivals, potential mates, and the predator trying to eat you.
Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D. Eisenhower From moulton at moulton.com Tue Jan 4 20:18:44 2011 From: moulton at moulton.com (F. C. Moulton) Date: Tue, 04 Jan 2011 12:18:44 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> Message-ID: <4D2380A4.6010609@moulton.com> The following is a list by the USA Dept. Vet. Affairs http://www.cem.va.gov/hm/hmemb.asp From spike66 at att.net Tue Jan 4 21:32:39 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 13:32:39 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> Message-ID: <001201cbac56$ea0ce670$be26b350$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Dave Sill Subject: Re: [ExI] atheists declare religions as scams On Tue, Jan 4, 2011 at 1:32 PM, spike wrote: Are there already symbols for agnostics and atheists? http://www.religioustolerance.org/atheist6.htm The Darwin fish is the only one I've seen. -Dave Ja the Darwin fish is as close as we have. When those first showed up I supposed it could mean those who are cultural christians but who accept evolution as fact. This conflation might actually be refuted by a legged Darwin fish with a cross at the eye location. A christian evolutionist is not necessarily a contradiction: it would be one who recognizes that not all of christianity is bad or harmful. It certainly contains these elements, but these can be specifically refuted. For instance one can be a christian sex defender, can be one who believes death is final as all hell, yet still believes in doing good deeds at every opportunity and so forth. 
I see no contradiction in being a Darwinian christian atheist. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Jan 4 22:47:58 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 14:47:58 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: <4D2380A4.6010609@moulton.com> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <4D2380A4.6010609@moulton.com> Message-ID: <002901cbac61$6efd72a0$4cf857e0$@att.net> On Behalf Of F. C. Moulton The following is a list by the USA Dept. Vet. Affairs http://www.cem.va.gov/hm/hmemb.asp Excellent Fred, thanks. I propose we stop fighting those who insist cryonics is a religion, and just say OK fine, it's a religion rather than an advanced medical technique, and here is our logo: It has the self-deprecating humor angle in there, even though the head in the ice cube notion isn't quite accurate. I don't know how to draw liquid nitrogen. Such a logo could be carved on a headstone, if one chooses to have one. A cryonaut having a grave isn't as outlandish as it sounds: one may already have a cemetery plot before one ever hears of cryonics, and the family may wish to have it so. Everyone can win here: head entrusted to Max's staff, everything from the neck down goes into the ground, no problem, so long as they don't insist on an open casket funeral (eewww). Or actually they could still do an open casket for a cryonaut: have an artist make up a ceramic likeness based on how you look now, which is great compared to how you will look when your time comes, blunk that down on top of your severed neck, be the centerpiece for your funeral, all while your second favorite organ is safely preserved in the nitrogen dewar in Arizona. Jeez I need to stop posting, I feel just too weird today. I think it is that time of year. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.emz Type: application/octet-stream Size: 9014 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 7258 bytes Desc: not available URL: From rtomek at ceti.pl Wed Jan 5 02:04:59 2011 From: rtomek at ceti.pl (Tomasz Rola) Date: Wed, 5 Jan 2011 03:04:59 +0100 (CET) Subject: [ExI] atheists declare religions as scams In-Reply-To: <002901cbac61$6efd72a0$4cf857e0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <4D2380A4.6010609@moulton.com> <002901cbac61$6efd72a0$4cf857e0$@att.net> Message-ID: On Tue, 4 Jan 2011, spike wrote: > On Behalf Of F. C. Moulton > > The following is a list by the USA Dept. Vet. Affairs > > http://www.cem.va.gov/hm/hmemb.asp > > Excellent Fred, thanks. Yep. It would be a much more interesting advert if it contained six faith symbols. And probably more honest :-). Pity atheists are too busy to include their own - it could have stood in the front, making a nice triangle in effect... Unfortunately, as with any other marketing, gaining money is more important than researching the truth. This is what such ads really are. Just MHO. > A cryonaut having a grave isn't as outlandish as it sounds: one may already > have a cemetery plot before one ever hears of cryonics, and the family may > wish to have it so. Everyone can win here: head entrusted to Max's staff, > everything from the neck down goes into the ground, no problem, so long as > they don't insist on an open casket funeral (eewww).
> > Or actually they could still do an open casket for a cryonaut: have an > artist make up a ceramic likeness based on how you look now, which is great > compared to how you will look when your time comes, blunk that down on top > of your severed neck, be the centerpiece for your funeral, all while your > second favorite organ is safely preserved in the nitrogen dewar in Arizona. Not a bad idea. I can add to it - the artist should give a cryonaut any head he or she would like. Yoda head, Darth Vader, Chewbacca... Why limit ourselves? There are plenty of big people out there, ancient deities, etc :-). Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From spike66 at att.net Wed Jan 5 03:26:32 2011 From: spike66 at att.net (spike) Date: Tue, 4 Jan 2011 19:26:32 -0800 Subject: [ExI] wordless negotiations: if force doesn't work, try bribery Message-ID: <001401cbac88$597216d0$0c564470$@att.net> Too bad we adults can't figure out how to negotiate peace deals as effectively as these two cousins. The 18 month old blonde has a great future as an ambassador: http://www.youtube.com/watch?v=3mqJniKHJpY spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Wed Jan 5 04:47:21 2011 From: moulton at moulton.com (F. C. 
Moulton) Date: Tue, 04 Jan 2011 20:47:21 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: <4D2380A4.6010609@moulton.com> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <4D2380A4.6010609@moulton.com> Message-ID: <4D23F7D9.1010303@moulton.com> > > http://www.cem.va.gov/hm/hmemb.asp > > In addition to the Atheist (#16) and Humanist (#32) it is interesting to note Wicca (#37) about which there was quite a bit of controversy a few years back. Fred From bbenzai at yahoo.com Wed Jan 5 09:37:16 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 5 Jan 2011 01:37:16 -0800 (PST) Subject: [ExI] atheists declare religions as scams In-Reply-To: Message-ID: <607430.58929.qm@web114417.mail.gq1.yahoo.com> "F. C. Moulton" wrote: > > The following is a list by the USA Dept. Vet. Affairs > > http://www.cem.va.gov/hm/hmemb.asp Hey, they missed out Satanism! http://en.wikipedia.org/wiki/Sigil_of_Baphomet Ben Zaiboc From anders at aleph.se Wed Jan 5 10:30:47 2011 From: anders at aleph.se (Anders Sandberg) Date: Wed, 05 Jan 2011 11:30:47 +0100 Subject: [ExI] Singletons In-Reply-To: <20110103181026.GB16518@leitl.org> References: <4D1BD7D2.5030403@aleph.se> <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> <4D22079E.2050201@aleph.se> <20110103181026.GB16518@leitl.org> Message-ID: <4D244857.2040404@aleph.se> On 2011-01-03 19:10, Eugen Leitl wrote: > On Mon, Jan 03, 2011 at 06:30:06PM +0100, Anders Sandberg wrote: > >> Which could be acceptable if the rules are acceptable. Imagine that > > The question is who makes the rules? Imagine a lowest common > denominator rule enforcer, using quorum of all people on this > planet. A very scary thought. Not entirely different from Eliezer's Coherent Extrapolated Volition. 
Although the idea is to be a bit more sophisticated than "lowest common denominator" (and this is of course where things rapidly become complex and interesting to philosophers, but tricky to implement). The singleton design problem and the friendly AI problem seem to be similar, maybe even identical. We want to define a structure that can be relied on to not misbehave even when expanded beyond the horizons we know when we design it. Singletons might not have to be superintelligent, although that is likely a desirable property of a singleton. My own favored answer to the friendly AI problem is that since the design part looks very hard and we know we can make reasonably stable and self-constraining communities of minds (they are called societies), we should aim at that instead. But this presupposes that the "hard takeoff in a basement somewhere" scenario is unlikely. If we have reason to think that it might matter then we better get the friendliness working before it happens, prevent AI emergence or direct it towards safer forms. Similarly for singletons, if we think there are unlikely to be any threats worth the risk of singletons we can just let things coordinate themselves. But if we think there are serious threats around, then we better figure out how to make singletons, prevent the singleton-worthy threats somehow else, or make the threats non-singleton-worthy. In any case, figuring out how to figure out upcoming xrisks well seems to be a good idea. > Aargh. So the singleton can do whatever it wants by tweaking the > physical layer. I think that is the standard singleton. Scary enough, but then there is the motivational singleton (can control the motivations of all agents) and identity singleton (it is all agents). Controlling the physical substrate might be less powerful than controlling motivations.
-- Anders Sandberg Future of Humanity Institute Oxford University From eugen at leitl.org Wed Jan 5 11:16:41 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 5 Jan 2011 12:16:41 +0100 Subject: [ExI] Singletons In-Reply-To: <4D244857.2040404@aleph.se> References: <4D1CB451.8000608@aleph.se> <20101230175810.GQ16518@leitl.org> <4D1DE91B.30705@aleph.se> <20101231145217.GI16518@leitl.org> <4D1F0ECF.2070409@aleph.se> <20110101164211.GA16518@leitl.org> <4D22079E.2050201@aleph.se> <20110103181026.GB16518@leitl.org> <4D244857.2040404@aleph.se> Message-ID: <20110105111641.GA16518@leitl.org> On Wed, Jan 05, 2011 at 11:30:47AM +0100, Anders Sandberg wrote: > Not entirely different from Eliezer's Coherent Extrapolated Volition. I have yet to see something in CEV worth criticizing. So far, it's a lot of vague handwaving. > Although the idea is to be a bit more sophisticated than "lowest common > denominator" (and this is of course where things rapidly become complex > and interesting to philosophers, but tricky to implement). As soon as things start becoming complex, we're pretty close to the human design complexity ceiling. So you need to build a critical seed for hard takeoff (in order to create the initial capability asymmetry, and hence enforcement ability), yet you can't validate anything much about what exactly will happen almost immediately after. Danger, danger Will Robinson. > The singleton design problem and the friendly AI problem seem to be > similar, maybe even identical. We want to define a structure that can be I thought they were the very same exact identical thing. > relied on to not misbehave even when expanded beyond the horizons we > know when we design it.
Singletons might not have to be > superintelligent, although that is likely a desirable property of a Singletons will pretty much have to be superintelligent, because they will be under constant, heavy attack by everyone with a spare neuron (unless said spare neurons are considered to be too dangerous, and universal lobotomy is mandated along with arrest of self-enhancement through Darwinian evolution. Isn't it nice to be almost omniscient and almost omnipotent?) > singleton. > > My own favored answer to the friendly AI problem is that since the > design part looks very hard and we know we can make reasonably stable > and self-constraining communities of minds (they are called societies), > we should aim at that instead. But this presupposes that the "hard Yeah, pretty much so. > takeoff in a basement somewhere" scenario is unlikely. If we have reason Basement is unlikely, unless it's the basement of a massively black funded military project, which is attempting to preempt another such project. I do not think anything like that exists, though of course it would be difficult to identify, other than by tracing publications, monetary streams, and particular purchases. > to think that it might matter then we better get the friendliness > working before it happens, prevent AI emergence or direct it towards In order to get friendliness, you must first define friendliness. Then build an evolution constrainer, asserting conservation of that metric. > safer forms. Similarly for singletons, if we think there are unlikely to > be any threats worth the risk of singletons we can just let things > coordinate themselves. But if we think there are serious threats around, I think we should let things happen, while exercising oversight over anything which could produce Blight. Like said military black projects. > then we better figure out how to make singletons, prevent the > singleton-worthy threats somehow else, or make the threats > non-singleton-worthy.
In any case, figuring out how to figure out upcoming xrisks well seems to be a good idea. We do not seem to be making much progress in that area in the last 20-30 years. Admittedly, almost nobody is working on it, but it might be that it's a very hard problem. > >> Aargh. So the singleton can do whatever it wants by tweaking the >> physical layer. > > I think that is the standard singleton. Scary enough, but then there is > the motivational singleton (can control the motivations of all agents) If you can tweak the physical layer, you can tweak the motivations of all agents, since they're all operating at the same physical layer. This can be a very subtle effect, which accumulates over time. The eventual result is a twisted, warped, evil thing. > and identity singleton (it is all agents). Controlling the physical > substrate might be less powerful than controlling motivations. All computation is embodied, motivation is just a particular computation. Consider how parasites influence behaviour; the easiest way to implement the cop is the cop infesting your CNS, or a controlling nanoparasite network in every head, along with support infrastructure everywhere. It would, of course, induce selective autoagnosia in all its hosts. Sounds friendly enough yet? But we haven't even started. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Jan 5 12:05:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 5 Jan 2011 13:05:57 +0100 Subject: [ExI] Spacecraft (was MM) In-Reply-To: <004701cbac24$d824c680$886e5380$@att.net> References: <004701cbac24$d824c680$886e5380$@att.net> Message-ID: <20110105120557.GG16518@leitl.org> On Tue, Jan 04, 2011 at 07:34:14AM -0800, spike wrote: > That's a really interesting notion Keith.
I have a hybrid notion of sorts: > we do single axis control using all ground-based drive lasers, and the other There's one advantage if you put the laser battery at 6 km height, just as your vehicle leaves the maglev: optical clarity of air. 3 g acceleration has been demonstrated with maglev launch prototypes, so after a mere 10 km you're well beyond Mach 2. Assuming you can track the vehicle for another 100 km after release, this still gives you a minute or so of extra burn. Instead of accelerating a 100 ton craft, you could probably cut the mass further, which would need less expensive maglev and fewer (you need at least 10^3) of these ~MW solid-state lasers, and the corresponding tracking optics. (There's another advantage of the battery: you could use photonic sails, for orbits much above 100 km). I think the key is to leave as much of the drive at home as possible. Maglev could save you the first stage, the laser could save you the second, so you're at the Holy Grail: one vehicle to LEO. Particularly, if you don't bother with controlled reentry, this can get very simple and cheap (but for the maglev and the laser stage, and enough photovoltaics and buffer capacity to power each shot, which could be once a day, or hourly, for that matter). > axis of control is done by the spacecraft. The idea of sweeping in only one > axis necessitates being on or very near the equator, so imagine one of those > equatorial mountains as a base. We lift to 10k using solids or perhaps even > air breathing recoverable propulsion, then use a sweeping single axis > control (elevation only) on the ground firing due east, where the bird is > responsible for moving in the north-south axis to stay in the beam. > > Another way to do this is to have a variable roll rate on the bird, then > take advantage of asymmetric thrust of an ablative propulsion system to > steer itself into the beam. I need to work on that idea some more. > > Thanks Keith, this is an interesting idea.
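The 3 g / 10 km maglev figure above checks out on the back of an envelope (a sketch only; 340 m/s for Mach 1 at sea level is my round number, and a real release speed would also depend on track profile and drag):

```python
# Back-of-envelope check of the maglev release velocity quoted above.
import math

g = 9.81          # m/s^2
a = 3 * g         # 3 g, as demonstrated by maglev launch prototypes
d = 10_000.0      # 10 km of track
mach1 = 340.0     # m/s, rough speed of sound at sea level (an assumption)

v = math.sqrt(2 * a * d)  # v^2 = 2 a d, starting from rest
print(f"release velocity: {v:.0f} m/s, about Mach {v / mach1:.1f}")
# → release velocity: 767 m/s, about Mach 2.3 -- "well beyond Mach 2"
```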
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Wed Jan 5 12:11:39 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 5 Jan 2011 23:11:39 +1100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D23652C.7030502@satx.rr.com> References: <1976.52184.qm@web114415.mail.gq1.yahoo.com> <4D23652C.7030502@satx.rr.com> Message-ID: On Wed, Jan 5, 2011 at 5:21 AM, Damien Broderick wrote: > Zombies? Zombies? What part of "NOBODY HAS EVER DENIED THIS! An exact copy > of you MUST experience himself as you" didn't you understand? Although you acknowledge that you still think there is something special about one copy rather than the other, so that one is "you" and the other is not. It isn't that the "real you" has the same atoms, the same configuration, the same memories, continuity with the pre-copy version, or any other physical fact, since thought experiments can be devised showing up all these criteria as inadequate. It must therefore be some non-physical fact that makes you you and not a copy of you. -- Stathis Papaioannou From eugen at leitl.org Wed Jan 5 12:16:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 5 Jan 2011 13:16:08 +0100 Subject: [ExI] simulation as an improvement over reality. 
In-Reply-To: <2266AC4B-ADEA-48A7-8851-9BEFCD1B4A9F@bellsouth.net> References: <85700B2C-3F39-49CD-B711-F0E38170A0DC@mac.com> <20110101164437.GB16518@leitl.org> <4D1F98C4.50109@satx.rr.com> <20110102095000.GS16518@leitl.org> <4D20AE4F.1090501@satx.rr.com> <62FDA911-88C6-4CF7-98C7-FED793891F4B@bellsouth.net> <4D221FD8.3000103@satx.rr.com> <20110103194304.GD16518@leitl.org> <2266AC4B-ADEA-48A7-8851-9BEFCD1B4A9F@bellsouth.net> Message-ID: <20110105121608.GK16518@leitl.org> On Tue, Jan 04, 2011 at 11:20:19AM -0500, John Clark wrote: > >> > >> Damien Broderick wrote: > >> Read what I wrote. A vitrified brain is not a living brain. > > > Eugen Leitl wrote: > > A vitrified brain is a snapshot, potentially enough to > > resume the original process (you'll be dropping a few bits on > > the floor, as short-term memory will not be consolidated, > > so you'll lose at least a couple hours). > > Yes, but the machine built by Kenneth J. Hayworth doesn't > use vitrified brains, it uses fresh wet squishy brains, No, it uses fixed, resin-perfused brains. See http://www.depressedmetabolism.com/2010/01/28/brain-preservation/ and http://www.depressedmetabolism.com/chemopreservation-the-good-the-bad-and-the-ugly/ also http://www.depressedmetabolism.com/2010/08/09/ken-hayworth-on-straight-freezing-in-cryonics/ > and makes consistent slices of it 29.4 nanometers thick > that are ready to be photographed with an electron microscope. The method is not relevant, provided it works. You still drop bits on the floor, since long-term memory consolidation happens on an hour scale, while the start of either process either assumes you're already flat-EEG, or will make you flat-EEG in very short order (if you have seen dogs hit with formalin, it is very quick).
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From thespike at satx.rr.com Wed Jan 5 15:59:32 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 05 Jan 2011 09:59:32 -0600 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: References: <1976.52184.qm@web114415.mail.gq1.yahoo.com> <4D23652C.7030502@satx.rr.com> Message-ID: <4D249564.2090208@satx.rr.com> On 1/5/2011 6:11 AM, Stathis Papaioannou wrote: > Although you acknowledge that you still think there is something > special about one copy rather than the other, so that one is "you" and > the other is not. The way you choose to express this reveals a confusion. As I've argued previously, the original (by definition) is NOT a "copy". It is an instance. The copy is also an instance, but it is a copied or emulated instance, not the original instance. Does this make any practical or legal or moral difference to either of them? That's a judgment call. If it's necessary to obliterate or disassemble the original instance in order to transport a snapshot of its configuration elsewhere in space or time, then build a copy emulating the original's functions, I see no stake for the original in this process. You and many others on this list disagree. John Clark tells us he'd do it in a heartbeat. Okay. There is a disagreement over what seems self-evident and there the discussion has to stop. 
Damien Broderick From dan_ust at yahoo.com Wed Jan 5 15:33:28 2011 From: dan_ust at yahoo.com (Dan) Date: Wed, 5 Jan 2011 07:33:28 -0800 (PST) Subject: [ExI] atheists declare religions as scams In-Reply-To: <4D2380A4.6010609@moulton.com> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <4D2380A4.6010609@moulton.com> Message-ID: <760838.67369.qm@web30108.mail.mud.yahoo.com> Nothing for Satanists? Did I miss something? Regards, Dan From: F. C. Moulton To: ExI chat list Sent: Tue, January 4, 2011 3:18:44 PM Subject: Re: [ExI] atheists declare religions as scams The following is a list by the USA Dept. Vet. Affairs http://www.cem.va.gov/hm/hmemb.asp _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 5 18:22:15 2011 From: spike66 at att.net (spike) Date: Wed, 5 Jan 2011 10:22:15 -0800 Subject: [ExI] atheists declare religions as scams In-Reply-To: <760838.67369.qm@web30108.mail.mud.yahoo.com> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <4D2380A4.6010609@moulton.com> <760838.67369.qm@web30108.mail.mud.yahoo.com> Message-ID: <000a01cbad05$7b3f5160$71bdf420$@att.net> http://www.cem.va.gov/hm/hmemb.asp >Nothing for Satanists? Did I miss something? Regards, Dan Dan check the list, you will see that number 43 is mysteriously missing. I have a theory: the Satanists hatched an evil satanic plot wherein they cast a spell to make their iniquitous logo invisible, thus co-opting all who choose no religious symbol, as wickedly symbolized by the mysteriously missing number 43. Diabolically clever was the wickedness of this evil plot, for now where there is nothing, that now represents the evil wicked religion of Satanism. Wicked evil is this. 
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Wed Jan 5 19:54:22 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 5 Jan 2011 12:54:22 -0700 Subject: [ExI] Meat v. Machine In-Reply-To: <000101cba905$4a2991c0$de7cb540$@att.net> References: <586924.64702.qm@web65615.mail.ac4.yahoo.com> <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <000101cba905$4a2991c0$de7cb540$@att.net> Message-ID: Been lovin' this thread. Something on topic may follow. For now a little political comment. On Fri, Dec 31, 2010 at 9:10 AM, spike wrote: >?I see a much bigger threat from rapidly > increasing populations of what SF writers > would call feral humans opposed to technology, Not to worry re "feral" humans. They can drag a culture down, but I doubt they will bring down all of humanity. Persons of high culture, intelligence, and (economic) power on the other hand, dangerous. Feral people form lynch mobs and start bar fights. Rich cultured folks build WMDs and start wars > while the population in advanced > civilization remains constant or > declines, while struggling to defend the > progress already made. Sounds like you're talking about the US, which may indeed crater -- "the impostume of too much wealth and pe...", well, too much wealth, then. The Chinese and Indians will take up the challenge,...have taken it up, Hoorah! "Progress" doesn't need nationalism, or any other form of emotionalism. > But I have hope, As do I, and I like rice as well as curry. Best, Jeff Davis "We're a band of higher primates stuck on the surface of an atmosphere-hazed dirtball. I can associate with that. I certainly can't identify with which patch of the dirtball I currently happen to be on, and which monkey tribe happens to reside therein. Only by taking the big view we can make it a common dream, and then a reality. 
It's worth it." Eugen Leitl From jrd1415 at gmail.com Wed Jan 5 23:57:15 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 5 Jan 2011 16:57:15 -0700 Subject: [ExI] Meat v. Machine In-Reply-To: <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> Message-ID: On Fri, Dec 31, 2010 at 3:35 PM, Samantha Atkins wrote: > As already discussed, this is not doable at lunar distances I take it you hold tele-operation at Lunar distances and beyond as not doable. I can accept that time lag may require something of a modified approach -- some adjustments -- but "not doable"? Don't be cruel. I have this notion of an army of mobile robots with exceedingly dexterous manipulators busily working away 24/7 on the lunar surface; all of them "piloted" by ecstatic, pay-for-the-privilege, Earth-bound, ex-gamer geeks; and a wait-list of would-be "pilots", cash in hand, chompin' at the bit. If you could sign up to tele-operate a lunar robot, how much would you pay for the chance to go on that "ride"? Can you spell "Business Plan"? > and beyond until the remote systems are much more nearly autonomous. Please, please, not autonomous. Where's the fun in that? Autonomous sometimes, maybe, like when you have to sleep, or go to the john -- no, no; what was I thinking? Clearly, you can bring your laptop or "game controller" into the john with you. And for out at the 'riod belt, where time lag is, you know, substantial, well then you operate a whole passel of robots -- with, okay, some helpful autonomy, say with routine sequences -- you just do it serially.
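The time-lag point above is easy to quantify (a sketch with round-number average distances; the Earth-to-belt distance actually swings between roughly 1.2 and 3.7 AU depending on orbital positions): round-trip light time is seconds for the Moon but tens of minutes for the asteroid belt, which is why direct joystick piloting can work for lunar robots while belt robots would be supervised serially.

```python
# Round-trip signal delay for teleoperation at various distances.
C = 299_792_458.0  # speed of light, m/s
AU = 1.496e11      # astronomical unit, m

distances_m = {
    "Moon": 384_400e3,          # average Earth-Moon distance
    "asteroid belt": 2.7 * AU,  # rough Earth-to-belt figure; varies widely
}

for place, d in distances_m.items():
    rtt = 2 * d / C  # out and back
    print(f"{place}: round-trip delay about {rtt:.0f} s")
# Moon: ~2.6 s -- awkward, but direct piloting is workable.
# Asteroid belt: ~45 minutes -- direct piloting is out; supervise serially.
```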
> My greatest source of concern right now is the economic implosion of the US dollar When the dollar 'implodes' -- i.e., is devalued radically, say 30-40% -- the result will bring the US back into the game. US exports will be competitive once again; Americans will work again; the constraints of austerity will make Americans sane and sensible again. They will ride bicycles a lot. Life will go on, in an orderly fashion even. Or not. > The economic hole the world has dug itself into is the one I feel the most helpless to do much about or even get a really good idea of what should be done about it. I think we have pushed past the point where we can just stand back and let it fall down in a very unpleasant but non-catastrophic way. And we can't prop it up forever. Humans are both natural worriers and natural builders. Modern industrial productivity is huge. Humanity is capable of huge overproduction. I believe we've reached a point now where even a severe economic contraction will not bring us below the level of need (as distinguished from the level of 'want'). Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From thespike at satx.rr.com Thu Jan 6 00:18:40 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 05 Jan 2011 18:18:40 -0600 Subject: [ExI] Meat v. Machine In-Reply-To: References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> Message-ID: <4D250A60.30505@satx.rr.com> On 1/5/2011 5:57 PM, Jeff Davis wrote: > If you could sign up to tele-operate a lunar robot, how much would you > pay for the chance to go on that "ride"? How much would you pay (or how much time expend) to hack into the things and crash them into each other? Dodgem Cars! See them flying apart in slow motion!
Damien Broderick From spike66 at att.net Thu Jan 6 00:33:41 2011 From: spike66 at att.net (spike) Date: Wed, 5 Jan 2011 16:33:41 -0800 Subject: [ExI] Meat v. Machine In-Reply-To: References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> Message-ID: <007101cbad39$5ede28f0$1c9a7ad0$@att.net> ... On Behalf Of Jeff Davis Subject: Re: [ExI] Meat v. Machine On Fri, Dec 31, 2010 at 3:35 PM, Samantha Atkins wrote: >> As already discussed, this is not doable at lunar distances >I have this notion of an army of mobile robots with exceedingly dexterous manipulators busily working away 24/7on the lunar surface; all of them "piloted" by ecstatic, pay-for-the-privilege, Earth-bound, ex-gamer geeks; and a wait-list of would-be "pilots", cash in hand, chompin' at the bit...If you could sign up to tele-operate a lunar robot, how much would you pay for the chance to go on that "ride"? Can you spell "Business Plan"?... Jeff, this is pure brilliance, me lad. Take the Tom Sawyer approach to painting that fence. >> My greatest source of concern right now is the economic implosion of the US dollar... Ja mine too. I have been thinking a lot about this lately, and trying to derive a reasonable strategery. >When the dollar 'implodes' -- ie is devalued radically, say 30-40% -- the result will bring the US back into the game. US exports will be competitive once again; Americans will work again; the constraints of austerity will make Americans sane and sensible again. They will ride bicycles a lot. Life will go on, in an orderly fashion even. Or not... I am betting on "or so." The US and much of Europe is suffering from what I call Drones Club syndrome, for those of you who are fellow Wooster and Jeeves fans. 
Wodehouse does such a good job of capturing the essence of 1920s Britain, where the lost generation of noble Brits were so long coddled, they eventually knew not how to actually do anything. One could set them in a field of ripe corn and hand them a pig, and they would starve. The Drones Club was filled with clueless ignoramuses who had plenty of money because of what their fathers and grandfathers did, but they themselves had no idea where wealth actually came from or why. They had good food, nice clothes, castles and the best of everything an industrialized nation could produce, yet not a vague clue. We are the drones. But we can be taught. All is not lost. Best, Jeff Davis Yes by all means, snip that. Far too pessimistic methinks. We should be able to write a good simulation of builder-bots operating 1.3 seconds away, and see what we could accomplish with that modest level of latency. Once one gets the rhythm, we will be able to do plenty of building with a 2.6 second round trip feedback loop. spike From msd001 at gmail.com Thu Jan 6 01:16:46 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 5 Jan 2011 20:16:46 -0500 Subject: [ExI] Meat v. Machine In-Reply-To: <007101cbad39$5ede28f0$1c9a7ad0$@att.net> References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <007101cbad39$5ede28f0$1c9a7ad0$@att.net> Message-ID: On Wed, Jan 5, 2011 at 7:33 PM, spike wrote: > Yes by all means, snip that. Far too pessimistic methinks. We should be > able to write a good simulation of builder-bots operating 1.3 seconds away, > and see what we could accomplish with that modest level of latency. Once > one gets the rhythm, we will be able to do plenty of building with a 2.6 > second round trip feedback loop.
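The 2.6 second feedback loop spike proposes simulating above needs very little infrastructure: a pair of FIFO buffers, one per direction, is enough to fake the light lag. A minimal sketch in plain Python; the 10 Hz control tick and the command/telemetry names are illustrative assumptions, not anyone's actual design:

```python
# Hypothetical sketch: FIFO buffers that delay a command stream by a fixed
# number of simulation ticks, mimicking the 1.3 s one-way Earth-Moon light
# lag in each direction (2.6 s round trip) at a 10 Hz control rate.
from collections import deque

TICK_HZ = 10
ONE_WAY_TICKS = 13  # 1.3 s * 10 ticks/s

def delayed_channel(delay_ticks):
    """Return a step function that emits each input value delay_ticks later."""
    buf = deque([None] * delay_ticks)
    def step(value):
        buf.append(value)
        return buf.popleft()  # None until the pipeline has filled
    return step

uplink = delayed_channel(ONE_WAY_TICKS)    # operator -> robot
downlink = delayed_channel(ONE_WAY_TICKS)  # robot telemetry -> operator

# Drive the loop: the operator's own command is echoed back 26 ticks later.
seen = []
for tick in range(40):
    cmd_at_robot = uplink(f"cmd{tick}")   # what the robot receives this tick
    telemetry = downlink(cmd_at_robot)    # what the operator sees this tick
    seen.append(telemetry)

# "cmd0" first reaches the operator's screen at tick 26, i.e. 2.6 s later.
```

At a 10 Hz tick, 13 slots per direction give the 1.3 s one-way delay, so an operator's action first shows up in his own video feed 26 ticks later; that delay buffer is the only piece a virtual-world simulator would need added.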
I sometimes have more than 2.6 seconds of latency using remote desktop software to control my work computer from home. Sometimes I have more than 1.3 seconds of latency using my work desktop when I'm actually AT work. Ask Windows to browse (via GUI) a multi-thousand-file directory and see how many "sweeps" of that silly flashlight it takes before you see the first file... With a semi-intelligent (or at least not-too-dumb) robot, a fairly long latency could be tolerable. I'm pretty sure a Roomba vacuum has enough smarts to not fall down stairs or kill the housepets - and for a few dollars more we could probably make a dumptruck version to move ready-to-haul regolith around the moon. Submarines have similar latencies too, don't they? Every action must fight through water, then wait for debris to settle or be swept away before a remote operator can orient to the new status (even when the machine is capable of greater responsiveness, the operator may not be). From spike66 at att.net Thu Jan 6 01:43:05 2011 From: spike66 at att.net (spike) Date: Wed, 5 Jan 2011 17:43:05 -0800 Subject: [ExI] Meat v. Machine In-Reply-To: References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <007101cbad39$5ede28f0$1c9a7ad0$@att.net> Message-ID: <009101cbad43$10e5c9a0$32b15ce0$@att.net> ... On Behalf Of Mike Dougherty ... >...With a semi-intelligent (or at least not-too-dumb) robot, a fairly long latency could be tolerable. I'm pretty sure a Roomba vacuum has enough smarts to not fall down stairs or kill the housepets - and for a few dollars more we could probably make a dumptruck version to move ready-to-haul regolith around the moon...
The problems of dealing with 2.6 second feedback is small compared to the problems presented with trying to keep apes alive and satisfied on the moon I would think. There is *plenty* we could build under those circumstances. After looking at all the alternatives, I have concluded that most of our big space stuff in the future will need to be built on the moon out of moon stuff and hurled out of that relatively shallow gravity well. We aren't that close to having machines that can build elaborate stuff however. Speaking of moon, how many caught that new crescent this evening. For US west coasters there is still time to see it. The conditions this evening were perfect: crystal clear and not a breath of wind. Since there was an eclipse yesterday, I knew exactly where to look for it. For something really cool, check this, the moon and the space station eclipsing the sun at the same time: http://www.astrophoto.fr/ spike From stathisp at gmail.com Thu Jan 6 06:32:59 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 6 Jan 2011 17:32:59 +1100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <4D249564.2090208@satx.rr.com> References: <1976.52184.qm@web114415.mail.gq1.yahoo.com> <4D23652C.7030502@satx.rr.com> <4D249564.2090208@satx.rr.com> Message-ID: On Thu, Jan 6, 2011 at 2:59 AM, Damien Broderick wrote: > On 1/5/2011 6:11 AM, Stathis Papaioannou wrote: > >> Although you acknowledge that you still think there is something >> special about one copy rather than the other, so that one is "you" and >> the other is not. > > The way you choose to express this reveals a confusion. As I've argued > previously, the original (by definition) is NOT a "copy". It is an instance. > The copy is also an instance, but it is a copied or emulated instance, not > the original instance. Does this make any practical or legal or moral > difference to either of them? That's a judgment call. 
If it's necessary to > obliterate or disassemble the original instance in order to transport a > snapshot of its configuration elsewhere in space or time, then build a copy > emulating the original's functions, I see no stake for the original in this > process. You and many others on this list disagree. John Clark tells us he'd > do it in a heartbeat. Okay. There is a disagreement over what seems > self-evident and there the discussion has to stop. The "original" could be defined as being a copy since its composition and structure change over time, but you choose not to define it as such. The reasoning here deserves close analysis. At first glance it may appear that you consider it self-evident that in ordinary life you are the original rather than the copy, and therefore that you survive as the same person from moment to moment. But I think that the actual sequence of reasoning is as follows: you consider it as self-evident that you survive as the same person from moment to moment, and therefore conclude from this that you must be the original and not a copy. So whatever information is presented to you as evidence you are *not* the original is dismissed by ad hoc adjustment of the definition of what a copy is. That is, instead of saying that you survive because a copy lives on with your mental qualities you prefer to say that since you self-evidently survive in ordinary life you can't really be a copy. This would not be so problematic if applied consistently, but it is not. In the case of destructive teleportation the sequence of reasoning is reversed: the self-evident belief is that you are a copy, so it follows that the belief that you have survived must be false. -- Stathis Papaioannou From possiblepaths2050 at gmail.com Thu Jan 6 07:39:38 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 6 Jan 2011 00:39:38 -0700 Subject: [ExI] A better option than fish oil? Message-ID: A better means of getting Omega-3's?
http://krilloil.mercola.com/krill-oil.html John From stathisp at gmail.com Thu Jan 6 09:36:41 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 6 Jan 2011 20:36:41 +1100 Subject: [ExI] simulation as an improvement over reality In-Reply-To: <88714.4120.qm@web114401.mail.gq1.yahoo.com> References: <88714.4120.qm@web114401.mail.gq1.yahoo.com> Message-ID: On Tue, Jan 4, 2011 at 12:32 PM, Ben Zaiboc wrote: > Your argument is the same as the old one about a person who, after being copied, should be quite happy to shoot himself. Naturally that is silly. Nobody would be happy to shoot themselves, regardless of how many identical copies of them were in existence. Unless the identical copies are in lockstep and shooting oneself will leave at least one of the copies running. In that case, the stream of consciousness of the terminated copies would continue uninterrupted. -- Stathis Papaioannou From possiblepaths2050 at gmail.com Thu Jan 6 09:47:23 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 6 Jan 2011 02:47:23 -0700 Subject: [ExI] 55 upcoming science fiction/fantasy films Message-ID: I am a huge science fiction/fantasy film buff, and so I thought I would share this io9 listing of *55* upcoming 2011 films! There looks to be some gems here, and of course, also some real clunkers to be uncovered when they are released. http://io9.com/5723075/55-science-fictionfantasy-movies-to-watch-out-for-in-2011 John : ) From eugen at leitl.org Thu Jan 6 10:03:05 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 6 Jan 2011 11:03:05 +0100 Subject: [ExI] Meat v.
Machine In-Reply-To: <4D250A60.30505@satx.rr.com> References: <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <4D250A60.30505@satx.rr.com> Message-ID: <20110106100305.GV16518@leitl.org> On Wed, Jan 05, 2011 at 06:18:40PM -0600, Damien Broderick wrote: > On 1/5/2011 5:57 PM, Jeff Davis wrote: > >> If you could sign up to tele-operate a lunar robot, how much would you >> pay for the chance to go on that "ride"? > > How much would you pay (or how much time expend) to hack into the things > and crash them into each other. Dodgem Cars! See them flying apart in > slow motion! There's no need to run these on public networks. And of course you would staff the control centers with trained professionals (some recruited from the gamer circles, just as today's drone pilots). It would be perfectly feasible to make a lunar game world in SecondLife, OpenSim or OpenCroquet, with the 2.5 s relativistic latency added via FIFO buffers. I presume the robots will be pretty small, something which would fit into a shoebox or maybe as big as a small golf cart. Wheeled chassis would be quite appropriate for many locations. For other places one could use something like a Big Dog (needs lots more power, and the many joints would be a problem). -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Jan 6 10:10:20 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 6 Jan 2011 11:10:20 +0100 Subject: [ExI] Meat v. 
Machine In-Reply-To: <007101cbad39$5ede28f0$1c9a7ad0$@att.net> References: <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <007101cbad39$5ede28f0$1c9a7ad0$@att.net> Message-ID: <20110106101020.GY16518@leitl.org> On Wed, Jan 05, 2011 at 04:33:41PM -0800, spike wrote: > Yes by all means, snip that. Far too pessimistic methinks. We should be > able to write a good simulation of builder-bots operating 1.3 seconds away, You don't need to write anything, other than adding a delay in the control flow and the video feed in current virtual world simulators. For a lark, you could even add 1/6 g to the physics, though that is not strictly required. It would be possible to build local reflexes with avatar scripting. > and see what we could accomplish with that modest level of latency. Once > one gets the rhythm, we will be able to do plenty of building with a 2.6 > second round trip feedback loop. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From scerir at alice.it Thu Jan 6 11:11:46 2011 From: scerir at alice.it (scerir) Date: Thu, 6 Jan 2011 12:11:46 +0100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: References: <1976.52184.qm@web114415.mail.gq1.yahoo.com><4D23652C.7030502@satx.rr.com><4D249564.2090208@satx.rr.com> Message-ID: <7E20D3E251624AA391C6232256F0B21E@PCserafino> Stathis writes: > That is, instead of saying that you survive because a copy lives > on with your mental qualities you [Damien] prefer to say that since > you self-evidently survive in ordinary life you can't really be a copy. > This would not be so problematic if applied consistently, but it is > not. 
In the case of destructive teleportation the sequence of > reasoning is reversed: the self-evident belief is that you are a copy, > so it follows that the belief that you have survived must be false. By "destructive teleportation" do you mean quantum teleportation? Asking this because, in this case, the quantum state of the original would be destroyed (but not the "meat", and his "consciousness"?) and recreated, at a distance, provided there is enough "meat" there. s. From spike66 at att.net Thu Jan 6 14:36:21 2011 From: spike66 at att.net (spike) Date: Thu, 6 Jan 2011 06:36:21 -0800 Subject: [ExI] simulation as an improvement over reality References: <88714.4120.qm@web114401.mail.gq1.yahoo.com> Message-ID: <001301cbadaf$168247f0$4386d7d0$@att.net> Subject: RE: [ExI] simulation as an improvement over reality ... >> Your argument is the same as the old one about a person who, after being copied, should be quite happy to shoot himself. Naturally that is silly...Stathis Papaioannou >No way. You shoot the copy of you. Then at your trial you argue he shot himself. spike If you are lucky, the court rules you innocent by reason of justifiable suicide. You shot yourself in self-defense, before you could shoot you. Come now, do we not already know the identity debates always eventually devolve to silliness? spike From pharos at gmail.com Thu Jan 6 14:43:15 2011 From: pharos at gmail.com (BillK) Date: Thu, 6 Jan 2011 14:43:15 +0000 Subject: [ExI] NYT reports criticisms of Precognition article Message-ID: Journal's Paper on ESP Expected to Prompt Outrage By BENEDICT CAREY Published: January 5, 2011 One of psychology's most respected journals has agreed to publish a paper presenting what its author describes as strong evidence for extrasensory perception, the ability to sense future events. The decision may delight believers in so-called paranormal events, but it is already mortifying scientists.
Advance copies of the paper, to be published this year in The Journal of Personality and Social Psychology, have circulated widely among psychological researchers in recent weeks and have generated a mixture of amusement and scorn. --------------------------- The main criticisms seem to point at the statistical analysis as being inadequate: Quote: Many statisticians say that conventional social-science techniques for analyzing data make an assumption that is disingenuous and ultimately self-deceiving: that researchers know nothing about the probability of the so-called null hypothesis. In this case, the null hypothesis would be that ESP does not exist. Refusing to give that hypothesis weight makes no sense, these experts say. ------------------------------ BillK From spike66 at att.net Thu Jan 6 14:30:59 2011 From: spike66 at att.net (spike) Date: Thu, 6 Jan 2011 06:30:59 -0800 Subject: [ExI] simulation as an improvement over reality In-Reply-To: References: <88714.4120.qm@web114401.mail.gq1.yahoo.com> Message-ID: <001201cbadae$564888a0$02d999e0$@att.net> ... > Your argument is the same as the old one about a person who, after being copied, should be quite happy to shoot himself. Naturally that is silly...Stathis Papaioannou No way. You shoot the copy of you. Then at your trial you argue he shot himself. spike From jonkc at bellsouth.net Thu Jan 6 15:09:41 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 6 Jan 2011 10:09:41 -0500 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <626599.70341.qm@web65611.mail.ac4.yahoo.com> References: <626599.70341.qm@web65611.mail.ac4.yahoo.com> Message-ID: <21F54F4B-2098-4BEC-BD11-0366D0A92014@bellsouth.net> On Jan 4, 2011, at 3:27 PM, The Avantguardian wrote: >> Me: >> Space-time lines of what, Space-time lines of every atom that was once part of >> your body including that atom you pissed down the toilet when you were in the >> third grade? 
>> third grade? > > Yes, that atom's world line orbited a mass of similar lines for some time before > being pissed away. That twisted mass of world lines was and is me. That can't be because the spacetime world lines (a fancy way of saying history) of most things have been erased from the universe; and this has nothing to do with the Heisenberg uncertainty principle, it's in addition to it, and it would be true even if you had a magic computer that could instantly perform an infinite number of calculations. Even if the universe were completely deterministic and even if you knew the exact state of the universe as it is right now and even if you had unlimited computational resources at your disposal you still couldn't figure out the complete history of the universe because the same outcome could have been produced in more than one way. Arithmetic is certainly deterministic but if you knew that two positive integers were added and the result was 6 you wouldn't know what those two integers were, they might have been 3 and 3, or 4 and 2, or 5 and 1. If something no longer exists in the cosmos I don't see how it could be the key to anything, much less identity. > Atoms come, exchange partners, and go. Some do it quickly, some slowly, but still there is a relatively stable pattern of atomic world lines Then you also have to assume that one time scale is unique and has properties no other one has, and science can find no evidence of that. On the time scale common in subatomic particles the pattern is indeed extraordinarily stable, but on a geological or astronomical time scale it is extraordinarily ephemeral. >> Yet another euphemism for the soul. And please explain why this "autocentric >> sense" cannot be copied in a perfect copy. > But nobody who actually believes in souls would think that I am describing anything remotely like a soul. Furthermore the sense *can* be copied but once it is copied it would become non-identical.
If the copy is non-identical then there is something in the "autocentric sense" that cannot be copied, something of ENORMOUS significance that nevertheless cannot be detected by the Scientific Method. There is a word in the English language for something like that and it begins with the letter "s". > For some items, perfect copies can't exist. You seem to have switched tactics, from saying that a perfect copy wouldn't be you to denying it could exist in the first place; but it doesn't matter, the perfect versus the almost perfect dichotomy cannot be the key to identity because otherwise I'd become a different person every time I took a sip of coffee. I don't think I do become a different person, or if I do then I can only conclude that becoming a different person doesn't matter very much. > imagine you have a perfect replicator that can replicate anything flawlessly and a perfect GPS unit > that can measure its own position with respect to the GPS satellite constellation with indefinitely high precision. Now imagine using the replicator on the GPS unit so that now you have two GPS units. Do the GPS units read *exactly* the same position? No, the GPS units do not read exactly the same position and the reason they do not is due to different environmental conditions that render them no longer identical, they will have diverged; and if you are being led to a torture chamber and your exact copy is not then the two of you are no longer exact either, you will have diverged. > Exchange forces play a role in my argument too because they mediate the Pauli Exclusion Principle that prevents fermions with identical quantum states from occupying the same position in space. The Pauli Exclusion Principle can be derived by considering 2 identical fermions in different locations and assuming that position is not a unique property of either one so an exchange of the two would not change the universe in any way.
The Pauli Exclusion Principle has been observed experimentally proving that the assumption was correct. And you can make a similar deduction concerning 2 identical bosons and conclude that they CAN occupy the same position in space; and again this has been experimentally confirmed to be true. The theory that position is the key to identity just doesn't hold water. >> So if I give you general anesthesia, put you on a jet to an undisclosed location >> and then wake you up Stuart LaForge will be dead and there will just be an >> impostor who looks, behaves, thinks and believes with every fibre of his being >> that he is Stuart LaForge > No because the autocentric sense is about *relative* positioning. It > recalibrates wherever I happen to find myself after the anesthesia wears off > back to being ground zero, the origin of my spatial map. If the "autocentric sense" recalibrates back to zero and you still feel like you then obviously it has nothing to do with the sense of self; what it actually does have to do with is not clear to me. > As animals started moving, they developed senses like sight, smell, and hearing. The sense organs were concentrated on the leading portion of the body in the accustomed direction of movement, because organisms needed to distinguish if they were moving toward predators or other hazards. To avoid signal propagation delays in processing sensory information from these sense organs, ganglia of nerve cells clustered immediately behind these sensory organs. Yes, and so these animals felt they were where their sense organs were, and that just happened to be close to where their brain was. But that detail wasn't caused by anything fundamental, just an evolutionary whim and the fact that nerve impulses move very slowly. Light moves about 5 million times as fast as nerve impulses, so your brain could be on the other side of the world but you'd still feel you were where your sense organs were.
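The speed comparison above is easy to sanity-check with rough numbers. A back-of-envelope sketch in Python; the 60 m/s figure for fast myelinated nerve fibres and the 20,000 km half-circumference of the Earth are assumed round values:

```python
# Rough check of the claim that light is ~5 million times faster than
# nerve impulses, and what that implies for a remotely located brain.
LIGHT_M_PER_S = 3.0e8   # speed of light in vacuum (approx.)
NERVE_M_PER_S = 60.0    # fast myelinated nerve fibre (assumed round value)

ratio = LIGHT_M_PER_S / NERVE_M_PER_S  # how many times faster light is

# Delay for a nerve impulse travelling ~1 m from toe to brain, versus
# light travelling halfway around the Earth (~20,000 km = 2.0e7 m).
nerve_delay_s = 1.0 / NERVE_M_PER_S
light_halfway_delay_s = 2.0e7 / LIGHT_M_PER_S

print(round(ratio))                     # 5000000
print(round(nerve_delay_s, 3))          # 0.017
print(round(light_halfway_delay_s, 3))  # 0.067
```

The interesting wrinkle: light halfway around the planet costs only about 67 ms, on the order of a toe-to-brain nerve delay, which is what makes the brain-on-the-other-side-of-the-world scenario physically plausible.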
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Thu Jan 6 15:56:27 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Thu, 6 Jan 2011 08:56:27 -0700 Subject: [ExI] Meat v. Machine In-Reply-To: <4D250A60.30505@satx.rr.com> References: <20101229093416.GY16518@leitl.org> <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <4D250A60.30505@satx.rr.com> Message-ID: On Wed, Jan 5, 2011 at 5:18 PM, Damien Broderick wrote: > On 1/5/2011 5:57 PM, Jeff Davis wrote: > >> If you could sign up to tele-operate a lunar robot, how much would you >> pay for the chance to go on that "ride"? > > How much would you pay (or how much time expend) to hack into the things and > crash them into each other. Dodgem Cars! See them flying apart in slow > motion! Coming soon to ESPN's Intergalactic Extreme Sports, "Robot Wars on the Moon". You too can be an Intergalactic Robot Warrior. Sign up for the first round of eliminations to be held in an Earthly lunar landscape near you -- file your "pilot's" application now and reserve your front row seat on Pay per View! More action, more diversity, more robust designs, more mayhem, more fun. You're not gonna like it, you're gonna LOVE it. The Rumble on the Regolith! Don't miss it! Whatever maximizes ROI. Capitalism thrives on creativity. Are we havin' fun yet? Best, Jeff Davis "We call someone insane who does not believe as we do to an outrageous extent." Charles McCabe From jonkc at bellsouth.net Thu Jan 6 15:34:00 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 6 Jan 2011 10:34:00 -0500 Subject: Re: [ExI] atheists declare religions as scams.
In-Reply-To: <001201cbac56$ea0ce670$be26b350$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> Message-ID: <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> On Jan 4, 2011, at 4:32 PM, spike wrote: > A christian evolutionist is not necessarily a contradiction There are christian evolutionists but it is a contradiction, they just live with it the same way all religious people deal with the absurdity of their beliefs, by putting their ideas in little air tight compartments with no way for them to interact with each other. Christians believe in a benevolent God who can do anything, so He could have produced the complexity of our world by just snapping his metaphorical fingers but instead he used Evolution, a hideously cruel process. That is FAR more evil than anything Satan did in the Bible. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Jan 6 15:41:41 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 6 Jan 2011 16:41:41 +0100 Subject: [ExI] An old text on "singularitarianism"... Message-ID: ... has re-emerged on the Associazione Italiana Transumanisti's mailing list, which I believe expresses a rather authoritative POV on issues recently discussed again and again in this list concerning possible AGI-related "rapture" and "doom" visions. <> -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thespike at satx.rr.com Thu Jan 6 16:43:19 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 06 Jan 2011 10:43:19 -0600 Subject: [ExI] NYT reports criticisms of Precognition article In-Reply-To: References: Message-ID: <4D25F127.5020802@satx.rr.com> On 1/6/2011 8:43 AM, BillK wrote: > Quote: > Many statisticians say that conventional social-science techniques for > analyzing data make an assumption that is disingenuous and ultimately > self-deceiving: that researchers know nothing about the probability of > the so-called null hypothesis. > > In this case, the null hypothesis would be that ESP does not exist. > Refusing to give that hypothesis weight makes no sense, these experts > say. > ------------------------------ Exactly. Since we know "radio-activity" does not exist, Madam Curie, it follows that your experiment is meaningless and foolish. Why, if these magical "rays" were part of the world, they would have been known since Aristotle; gamblers in casinos would have used them to see through the backs of their opponents' cards! Yet we know that Aristotle said nothing about such an absurdity, and casinos thrive. Trust Bayes and your prejudices over empirical data every time! Damien Broderick From spike66 at att.net Thu Jan 6 16:57:37 2011 From: spike66 at att.net (spike) Date: Thu, 6 Jan 2011 08:57:37 -0800 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: <004b01cbadc2$d2677f90$77367eb0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Subject: Re: [ExI] atheists declare religions as scams. 
On Jan 4, 2011, at 4:32 PM, spike wrote: A christian evolutionist is not necessarily a contradiction >There are christian evolutionists but it is a contradiction, they just live with it the same way all religious people deal with the absurdity of their beliefs, by putting their ideas in little air tight compartments with no way for them to interact with each other. Christians believe in a benevolent God who can do anything, so He could have produced the complexity of our world by just snapping his metaphorical fingers but instead he used Evolution, a hideously cruel process. That is FAR more evil than anything Satan did in the Bible. John K Clark John, the way I am using the term christian (with lower case c) implies atheist. The term is more of an adjective. So in this sense a christian evolutionist is one who is generally comfortable with the culture that tends to form in societies where upper case C Christians live, can deal with their superstitions and so forth. A lower case c christian can be one who was once an upper case C, but was crushed beneath the overwhelming weight of evidence for evolution, and who was then forced by the iron grip of reason to follow that observation to its logical conclusion. We were convinced against our will that there is no blissful afterlife, no 73 virgins awaiting our arrival, at best a future as a sim in a holodeck, at worst, nothing. Outside observers could scarcely tell there was a profound phase change occurring in the mind of that former upper case C believer as she reasoned through the consequences of unambiguous observation. Other than that unfortunate incident in Mark 11:15-19, Jesus really wasn't a bad guy. He is lucky the moneychangers didn't gang up and beat the holy shit out of the anti-capitalist bastard. But other than that, he was a good lad. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Thu Jan 6 17:35:37 2011 From: pharos at gmail.com (BillK) Date: Thu, 6 Jan 2011 17:35:37 +0000 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <004b01cbadc2$d2677f90$77367eb0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <004b01cbadc2$d2677f90$77367eb0$@att.net> Message-ID: 2011/1/6 spike wrote: > Other than that unfortunate incident Mark 11:15-19, Jesus really wasn't a > bad guy. He is lucky the moneychangers didn't gang up and beat the holy > shit out of the anti-capitalist bastard. But other than that, he was a good > lad. > > You think that because you've only read his supporters' stories. You should have read what the moneychangers wrote! BillK From sjatkins at mac.com Thu Jan 6 17:41:25 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 06 Jan 2011 09:41:25 -0800 Subject: [ExI] Asimov's 90th today In-Reply-To: <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> References: <201101030033.p030XVOE001258@andromeda.ziaspace.com> <002a01cbaae2$a099dcc0$e1cd9640$@att.net> <201101030158.p031wnnK002458@andromeda.ziaspace.com> <9A815D00-A778-48F0-9D2C-988B5F0E9B64@bellsouth.net> Message-ID: <1C91571C-0769-4445-A841-5F1149D4CFC6@mac.com> On Jan 3, 2011, at 10:39 AM, John Clark wrote: > On Jan 2, 2011, at 8:58 PM, David Lubkin wrote: > >> Fred Pohl's best friend was Isaac > > True. > >> one of the two people he would admit was smarter than he. (The other was Minsky.) > > Actually no, in his autobiography he said that both he and Pohl had IQ tests, I don't remember the exact numbers but both were in the upper 150's, however Asimov beat Pohl by one point. The only two people that Asimov had ever met that he thought were smarter than him were Marvin Minsky and Carl Sagan.
I have met one or two people that were so brilliant and/or seemingly differently wired that I have trouble to this day believing they were within human range. I watched one write two quite non-trivial programs, one with each hand, while discussing a fairly substantial technical issue orthogonal to either one. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Thu Jan 6 17:42:17 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 6 Jan 2011 09:42:17 -0800 Subject: [ExI] A better option than fish oil? In-Reply-To: References: Message-ID: Anyone pitching a product who claims that big organizations "don't want you to know about" it is almost always lying. Rather, the big organizations - if they are aware of it at all - have duly analyzed it and found that it is less useful to customers (i.e., us), at the price point it could be manufactured at, than their current product is at its price point. (If you get benefit X from $Y worth of the brand name product, and this alternative would give you 2*X benefit for $10*Y, most customers will simply buy twice as much of the brand name product. Granted, the math is usually not this simple - diminishing returns, and so on - but the full analysis amounts to the same conclusion.) Oftentimes, this analysis will be filed away on the heap of failed research any sufficiently large organization undertakes, and most employees of the company will not be specifically aware of this particular failure (because they have had no reason to spend their time reading about it). This happens often enough that it is a safe conclusion to make simply from the observation that this alternative is being promoted, by its maker, as something its big competitors "don't want you to know about". If it were truly, provably, superior, a better pitch would be to headline the provable superiority - and if the maker's marketers are competent, they will know this.
Therefore, either they are incompetent (which lends suspicion about the technical accuracy of claims made) or they know it is not in fact provably superior. On Wed, Jan 5, 2011 at 11:39 PM, John Grigg wrote: > A better means of getting Omega-3's? > > http://krilloil.mercola.com/krill-oil.html > > John > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Thu Jan 6 17:46:00 2011 From: spike66 at att.net (spike) Date: Thu, 6 Jan 2011 09:46:00 -0800 Subject: [ExI] atheists declare religions as scams. References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: <005701cbadc9$952ff150$bf8fd3f0$@att.net> . >.So in this sense a christian evolutionist is one who is generally comfortable with the culture that tends to form in societies where upper case C Christians live, can deal with their superstitions and so forth. Regarding cultural christians, John where would you rather live, if you had to choose one: Calcutta, Tehran, or Salt Lake City? No you wouldn't, you know SLC is where you'd rather be, even if it's not your favorite place. >.Other than that unfortunate incident Mark 11:15-19, Jesus really wasn't a bad guy. He is lucky the moneychangers didn't gang up and beat the holy shit out of the anti-capitalist bastard. But other than that, he was a good lad.spike I may need to explain that comment to the religion non-hipsters. Jesus went into the temple and overturned the tables of the moneychangers. He should have been encouraging the moneychangers. My reasoning goes like this. In the old days, guys went to the temple once a year with a beast of some sort to slay as they repented of their sins. They ritually transferred their sins to the beast, then slew it. 
It's where we get the term scapegoat. I swear I am not making this up. So imagine a guy (it was only the man who did this, since he was given license to repent *for his family* (not kidding)) decided he hadn't been such an evil lad this year, so he chooses a lamb, but then his wives remind him what a bastard he was, better take a full grown goat. So off he goes, but he thinks back and decides he really wasn't so bad, so now the smallest he has is this expensive beast, so the logical thing to do is to trade it in, and so hey buddy can you break a goat? Sure lambs and rabbits are fine. So this creates a market for money changers and beast changers. So after they break a goat, these guys have all the spare beasts, too many to take back home and too many to devour on the spot, so now since one is in the process of confessing sins anyways, might as well have something worth at least a lamb, commit a couple or three, just to entertain the priests. So that creates a market for harlots, and a perfect market it is: guys away from home, nothing but other guys around and they aren't talking, because they are all doing the same thing, looking for something for which to be fondly repentant for the coming new year, and no one recognizes anyone, like seeing your officemate in the porno shop, and what happens at the temple stays at the temple. So that's where we get the term temple harlots. Jesus, what's not to like? Just look at all the jobs created or saved by that system! Only very distantly related but unintentionally hilarious is this commentary in the local "news"paper yesterday: http://www.mercurynews.com/bay-area-news/ci_17017964?source=rss {8^D spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From atymes at gmail.com Thu Jan 6 17:48:04 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 6 Jan 2011 09:48:04 -0800 Subject: [ExI] NYT reports criticisms of Precognition article In-Reply-To: <4D25F127.5020802@satx.rr.com> References: <4D25F127.5020802@satx.rr.com> Message-ID: On Thu, Jan 6, 2011 at 8:43 AM, Damien Broderick wrote: > On 1/6/2011 8:43 AM, BillK wrote: >> Quote: >> Many statisticians say that conventional social-science techniques for >> analyzing data make an assumption that is disingenuous and ultimately >> self-deceiving: that researchers know nothing about the probability of >> the so-called null hypothesis. >> >> In this case, the null hypothesis would be that ESP does not exist. >> Refusing to give that hypothesis weight makes no sense, these experts >> say. >> ------------------------------ > > Exactly. Since we know "radio-activity" does not exist, Madam Curie, it > follows that your experiment is meaningless and foolish. Why, if these > magical "rays" were part of the world, they would have been known since > Aristotle; gamblers in casinos would have used them to see through the backs > of their opponents' cards! Yet we know that Aristotle said nothing about > such an absurdity, and casinos thrive. Trust Bayes and your prejudices over > empirical data every time! That's not how I read it. I think they're saying, the analysis fails to evaluate the probability that ESP does not exist, and only evaluates the probability that it does. From sjatkins at mac.com Thu Jan 6 17:57:38 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 06 Jan 2011 09:57:38 -0800 Subject: [ExI] Meat v. 
Machine In-Reply-To: <20110106100305.GV16518@leitl.org> References: <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <4D250A60.30505@satx.rr.com> <20110106100305.GV16518@leitl.org> Message-ID: <20820D4C-29DE-40D7-AB3D-B69C6C5100C1@mac.com> On Jan 6, 2011, at 2:03 AM, Eugen Leitl wrote: > On Wed, Jan 05, 2011 at 06:18:40PM -0600, Damien Broderick wrote: >> On 1/5/2011 5:57 PM, Jeff Davis wrote: >> >>> If you could sign up to tele-operate a lunar robot, how much would you >>> pay for the chance to go on that "ride"? >> >> How much would you pay (or how much time expend) to hack into the things >> and crash them into each other. Dodgem Cars! See them flying apart in >> slow motion! > > There's no need to run these on public networks. And of course you > would staff the control centers with trained professionals (some > recruited from the gamer circles, just as today's drone pilots). > > It would be perfectly feasible to make a lunar game world in > SecondLife, OpenSim or OpenCroquet, with the 2.5 s relativistic > latency added via FIFO buffers. You would probably want more realistic physics than any of these offer though. Last time I looked OpenCroquet had less accessible animation and much worse physics than either. But certainly this could be done good enough to be fun to play and with enough realism to not have disastrous things happen at the other end. > > I presume the robots will be pretty small, something which > would fit into a shoebox or maybe as big as a small golf > cart. Wheeled chassis would be quite appropriate for many > locations. For other places one could use something like > a Big Dog (needs lots more power, and the many joints would > be a problem). Yeah, Big Dog runs on an internal combustion engine. 
Not sure what you would want to replace it with of equivalent power to weight. - samantha From sjatkins at mac.com Thu Jan 6 18:01:56 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 06 Jan 2011 10:01:56 -0800 Subject: [ExI] Meat v. Machine In-Reply-To: <20110106101020.GY16518@leitl.org> References: <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <007101cbad39$5ede28f0$1c9a7ad0$@att.net> <20110106101020.GY16518@leitl.org> Message-ID: On Jan 6, 2011, at 2:10 AM, Eugen Leitl wrote: > On Wed, Jan 05, 2011 at 04:33:41PM -0800, spike wrote: > >> Yes by all means, snip that. Far too pessimistic methinks. We should be >> able to write a good simulation of builder-bots operating 1.3 seconds away, > > You don't need to write anything, other than adding a delay in the control > flow and the video feed in current virtual world simulators. For a lark, > you could even add 1/6 g to the physics, though that is not strictly > required. I don't believe SL has believable joints and such yet, and OpenSim lags behind SL in general physics and is ahead in a few specialized areas. Perhaps all good enough. Building a few models and comparing them to rover etc footage or to models in better physics simulations would lay the question to rest. > > It would be possible to build local reflexes with avatar scripting. You mean in animation loops? Scripting is just scripting AFAIK. There is no specific avatar brand of scripting outside of animation loops in SL and OpenSim.
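The "adding a delay in the control flow and the video feed" Eugen describes is just a FIFO on each stream. A minimal sketch (class and method names are illustrative, not any real simulator's API):

```python
import collections
import time

class DelayLine:
    """Hold (timestamp, payload) pairs and release each payload only
    after a fixed delay, emulating one-way Earth-Moon signal lag."""
    def __init__(self, delay_s):
        self.delay_s = delay_s
        self.queue = collections.deque()

    def push(self, payload, now=None):
        now = time.monotonic() if now is None else now
        self.queue.append((now, payload))

    def pop_ready(self, now=None):
        """Return every payload whose delay has elapsed, oldest first."""
        now = time.monotonic() if now is None else now
        out = []
        while self.queue and now - self.queue[0][0] >= self.delay_s:
            out.append(self.queue.popleft()[1])
        return out

# One-way light time to the Moon is ~1.28 s, so a command sent at t=0
# should not reach the simulated robot before t=1.28.
uplink = DelayLine(delay_s=1.28)
uplink.push("move arm", now=0.0)
assert uplink.pop_ready(now=1.0) == []            # still "in flight"
assert uplink.pop_ready(now=1.5) == ["move arm"]  # delivered
```

Running one such queue on the command uplink and another on the video downlink reproduces the full round trip a lunar teleoperator would feel.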
- s From stefano.vaj at gmail.com Thu Jan 6 18:30:51 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 6 Jan 2011 19:30:51 +0100 Subject: [ExI] atheists declare religions as scams In-Reply-To: <005401cbac29$a062d7a0$e12886e0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> Message-ID: 2011/1/4 spike : > http://www.christianpost.com/article/20110103/atheists-declare-religions-as-scams-in-new-ad/ > > Opposing the other four is harmless; taking on the one in the back row, far > right can result in murder. Frankly I do not see why exactly one should take on one of the front row. Unless for aspects which are easy to deconstruct as monotheistic influences, most "religions" can well end up being fairly compatible with "atheism". Even when they promote some concept or other of "god", which is not always the case, btw. -- Stefano Vaj From scerir at alice.it Thu Jan 6 19:22:03 2011 From: scerir at alice.it (scerir) Date: Thu, 6 Jan 2011 20:22:03 +0100 Subject: [ExI] stormy weathers? In-Reply-To: References: <20101229093416.GY16518@leitl.org><20101230121927.GL16518@leitl.org><002b01cba83b$1c9d2a70$55d77f50$@att.net><992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com><20101231110008.GD16518@leitl.org><20101231143742.GH16518@leitl.org><3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com><4D250A60.30505@satx.rr.com> Message-ID: <5E7C59EF0E4040D9B473A336EAB382B3@PCserafino> External force behind Swedish bird death http://www.thelocal.se/31278/20110106/ Blackbirds tumble from the Arkansas sky shortly before midnight on New Year's Eve http://www.cbsnews.com/stories/2011/01/03/national/main7208349.shtml From spike66 at att.net Thu Jan 6 19:09:15 2011 From: spike66 at att.net (spike) Date: Thu, 6 Jan 2011 11:09:15 -0800 Subject: [ExI] Meat v.
Machine In-Reply-To: References: <20101230121927.GL16518@leitl.org> <002b01cba83b$1c9d2a70$55d77f50$@att.net> <992B9886-F3CE-4D36-890F-3E2D5F3FA2BA@mac.com> <20101231110008.GD16518@leitl.org> <20101231143742.GH16518@leitl.org> <3A35A1E6-7688-4945-AA33-F9B64A3E2B44@mac.com> <007101cbad39$5ede28f0$1c9a7ad0$@att.net> <20110106101020.GY16518@leitl.org> Message-ID: <000001cbadd5$36a9ad40$a3fd07c0$@att.net> On Behalf Of Samantha Atkins Subject: Re: [ExI] Meat v. Machine On Jan 6, 2011, at 2:10 AM, Eugen Leitl wrote: > On Wed, Jan 05, 2011 at 04:33:41PM -0800, spike wrote: > >>> Yes by all means, snip that. Far too pessimistic methinks. We should be able to write a good simulation of builder-bots operating >>> 1.3 seconds away, > >> You don't need to write anything, other than adding a delay in the >> control flow and the video feed in current virtual world simulators... It would be possible to build local reflexes with avatar scripting. >You mean in animation loops? Scripting is just scripting AFAIK. There is no specific avatar brand of scripting outside of animation loops is SL and OpenSim. - s If we wanted to go to all the trouble, we could build the actual bots of aluminum and operate them inside an altitude chamber at 0.01 atm. Chambers that run at that pressure are not expensive or difficult to find: there's one sitting idle at China Lake NWC big enough to run our roomba vacuum cleaner sized dump trucks and bulldozers that I envision will be the first serious attempt at a meaningful lunar mission. That NWC chamber is big enough to handle a fighter jet, assuming the wings are folded, so we could create a simulated moon building site in there. We can't sim the .2 G, but we could make a reasonable analog of lunar regolith, temperature and atmosphere. 
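For reference, the command feedback delay such a testbed would program in follows directly from the mean Earth-Moon distance (a sketch; the true figure varies a few percent over the month):

```python
# Round-trip command/video delay for lunar teleoperation, from the
# mean Earth-Moon distance and the speed of light.
EARTH_MOON_M = 384_400_000   # mean distance in metres
C_M_PER_S = 299_792_458      # speed of light

one_way_s = EARTH_MOON_M / C_M_PER_S
round_trip_s = 2 * one_way_s

print(f"one-way delay:    {one_way_s:.2f} s")     # ~1.28 s
print(f"round-trip delay: {round_trip_s:.2f} s")  # ~2.56 s
```

The one-way figure matches the "1.3 seconds away" quoted earlier in the thread, and the round trip is where the ~2.6-second feedback delay comes from.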
We can even imagine a lunar sandbox on wheels, so that if the admiral wanted us to get our damn toys out of the way to make room to test his latest (useless) ape-hauling fighter jet, we could roll the whole mess out on a day's notice. We could program in 2.6 seconds command feedback delay. That would be a fun project! spike From stathisp at gmail.com Thu Jan 6 22:52:04 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 7 Jan 2011 09:52:04 +1100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: <7E20D3E251624AA391C6232256F0B21E@PCserafino> References: <1976.52184.qm@web114415.mail.gq1.yahoo.com> <4D23652C.7030502@satx.rr.com> <4D249564.2090208@satx.rr.com> <7E20D3E251624AA391C6232256F0B21E@PCserafino> Message-ID: On Thu, Jan 6, 2011 at 10:11 PM, scerir wrote: > Stathis writes: >> >> That is, instead of saying that you survive because a copy lives >> on with your mental qualities you [Damien] prefer to say that since >> you self-evidently survive in ordinary life you can't really be a copy. >> This would not be so problematic if applied consistently, but it is >> not. In the case of destructive teleportation the sequence of >> reasoning is reversed: the self-evident belief is that you are a copy, >> so it follows that the belief that you have survived must be false. > > By "destructive teleportation" do you mean quantum teleportation? > Asking this because, in this case, the quantum state of the original > would be destroyed, but not the "meat" (and his "consciousness"?) > and recreated, at a distance, provided there is enough "meat" there. > s. Either quantum teleportation or classical teleportation with destruction of the original in the scanning process. -- Stathis Papaioannou From stathisp at gmail.com Thu Jan 6 23:33:54 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 7 Jan 2011 10:33:54 +1100 Subject: [ExI] atheists declare religions as scams. 
In-Reply-To: <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: 2011/1/7 John Clark : > On Jan 4, 2011, at 4:32 PM, spike wrote: > > A christian evolutionist is not necessarily a contradiction > > There are christian evolutionists but it is a contradiction, they just live > with it the same way all religious people deal with the absurdity of their > beliefs, by putting their ideas in little air tight compartments with no way > for them to interact with each other. Christians believe in a benevolent God > who can do anything, so He could have produced the complexity of our world > by just snapping his metaphorical fingers, but instead he used Evolution, a > hideously cruel process. That is FAR more evil than anything Satan did in > the Bible. > John K Clark There is no contradiction in accepting a religion but rejecting its ancient beliefs as literal truth. Some theologians take the Bible about as seriously as classical scholars take the Iliad and the Odyssey; that is, they take it very seriously but they don't actually believe that any of the supernatural stuff happened. -- Stathis Papaioannou From stathisp at gmail.com Thu Jan 6 23:48:30 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 7 Jan 2011 10:48:30 +1100 Subject: [ExI] NYT reports criticisms of Precognition article In-Reply-To: References: <4D25F127.5020802@satx.rr.com> Message-ID: On Fri, Jan 7, 2011 at 4:48 AM, Adrian Tymes wrote: > That's not how I read it. I think they're saying, the analysis fails > to evaluate the > probability that ESP does not exist, and only evaluates the probability that it > does. Don't the two probabilities necessarily add up to 1, given that either ESP does or does not exist?
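The two probabilities do sum to 1; the dispute is over the prior weight given to the null. A toy sketch of the Bayesian point (the Bayes factor of 20 is purely illustrative, not taken from the study under discussion):

```python
def posterior_p(prior_p, bayes_factor):
    """Posterior P(ESP) given a prior P(ESP) and a Bayes factor
    (how much more likely the data are under ESP than under no ESP)."""
    prior_odds = prior_p / (1.0 - prior_p)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1.0 + post_odds)

# P(ESP) and P(no ESP) always sum to 1; the criticism is about starting
# from an even prior. The same evidence moves an agnostic and a sceptic
# to very different posteriors:
for prior in (0.5, 1e-6):
    print(prior, posterior_p(prior, bayes_factor=20.0))
# prior 0.5  -> posterior ~0.95
# prior 1e-6 -> posterior ~0.00002
```

Reporting only the likelihood ratio (the "X" without the "Y") leaves the reader unable to complete this calculation, which is the complaint quoted from the article.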
-- Stathis Papaioannou From scerir at alice.it Thu Jan 6 23:50:54 2011 From: scerir at alice.it (scerir) Date: Fri, 7 Jan 2011 00:50:54 +0100 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: References: <1976.52184.qm@web114415.mail.gq1.yahoo.com><4D23652C.7030502@satx.rr.com><4D249564.2090208@satx.rr.com><7E20D3E251624AA391C6232256F0B21E@PCserafino> Message-ID: >> By "destructive teleportation" do you mean quantum teleportation? >> Asking this because, in this case, the quantum state of the original >> would be destroyed, but not the "meat" (and his "consciousness"?) >> and recreated, at a distance, provided there is enough "meat" there. >> s. > > Either quantum teleportation or classical teleportation with > destruction of the original in the scanning process. > Stathis Papaioannou Classical teleportation would be better. In quantum teleportation (assuming the composite system of body + brain + consciousness is quantum in nature, for which there is no evidence at present), Alice would have to remove decoherence for the duration of the entangling operation, and Bob would have to reintroduce decoherence so that the teleported composite system becomes a classically functioning system again. All that seems hard enough. From spike66 at att.net Fri Jan 7 00:34:02 2011 From: spike66 at att.net (spike) Date: Thu, 6 Jan 2011 16:34:02 -0800 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: <001301cbae02$9530ebd0$bf92c370$@att.net> ... On Behalf Of Stathis Papaioannou ... >...There is no contradiction in accepting a religion but rejecting its ancient beliefs as literal truth.
Some theologians take the Bible about as seriously as classical scholars take the Iliad and the Odyssey; that is, they take it very seriously but they don't actually believe that any of the supernatural stuff happened...Stathis Papaioannou Stathis, I think this is an understatement. I would say *most* theologians, particularly the fundamentalist variety, discount the supernatural. I might be projecting, but I don't see how they could miss that if they really study the hell out of the bible. Hmmm, study the hell out of the bible, that has a delightful double meaning. spike From alaneugenebrooks52 at yahoo.com Fri Jan 7 01:13:39 2011 From: alaneugenebrooks52 at yahoo.com (Alan Brooks) Date: Thu, 6 Jan 2011 17:13:39 -0800 (PST) Subject: [ExI] A better option than fish oil? In-Reply-To: Message-ID: <986651.68167.qm@web46101.mail.sp1.yahoo.com> BTW, is aspirin still the only 'miracle' drug? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Fri Jan 7 01:21:14 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 6 Jan 2011 18:21:14 -0700 Subject: [ExI] Lunar dirt Message-ID: Will someone enlighten me about what remote manipulators on the moon are going to be doing? You don't have a lot to work with; lunar dirt is about as far from useful objects as I can imagine. I have followed this topic since the mid-1970s and, as far as I know, there was never a believable flow chart with rock going in and useful stuff coming out the other end. Take solar cells. Anyone have an idea of what sort of plant it takes to make silicon? What inputs the plant takes? What has to be frequently replaced?
Keith From atymes at gmail.com Fri Jan 7 02:21:39 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 6 Jan 2011 18:21:39 -0800 Subject: Re: [ExI] NYT reports criticisms of Precognition article In-Reply-To: References: <4D25F127.5020802@satx.rr.com> Message-ID: On Jan 6, 2011 3:49 PM, "Stathis Papaioannou" On Fri, Jan 7, 2011 at 4:48 AM, Adrian Tymes > That's not how I read it. I think they're saying, the analysis fails > > to evaluate the > > probability that ESP does not exist, and only evaluates the probability that it > > does. > > Don't the two probabilities necessarily add up to 1, given that either > ESP does or does not exist? 1, or 100%, or X to Y odds. Without going into too much detail, the problem is that they present the X but not the Y, and they don't give an absolute ranking (like .4 or 40%) from which the rest could be deduced. From fauxever at sprynet.com Fri Jan 7 02:25:13 2011 From: fauxever at sprynet.com (Olga Bourlin) Date: Thu, 6 Jan 2011 18:25:13 -0800 Subject: [ExI] simulation as an improvement over reality. In-Reply-To: References: <1976.52184.qm@web114415.mail.gq1.yahoo.com><4D23652C.7030502@satx.rr.com><4D249564.2090208@satx.rr.com><7E20D3E251624AA391C6232256F0B21E@PCserafino> Message-ID: <7BAFD2CAFF2340BFB2DB71B31D0DB1D6@Brainiac> Hilarious! Thanks ... :) -------------------------------------------------- From: "Stathis Papaioannou" Sent: Thursday, January 06, 2011 2:52 PM To: "ExI chat list" Subject: Re: [ExI] simulation as an improvement over reality. > On Thu, Jan 6, 2011 at 10:11 PM, scerir wrote: >> Stathis writes: >>> >>> That is, instead of saying that you survive because a copy lives >>> on with your mental qualities you [Damien] prefer to say that since >>> you self-evidently survive in ordinary life you can't really be a copy. >>> This would not be so problematic if applied consistently, but it is >>> not.
In the case of destructive teleportation the sequence of >>> reasoning is reversed: the self-evident belief is that you are a copy, >>> so it follows that the belief that you have survived must be false. >> >> By "destructive teleportation" do you mean quantum teleportation? >> Asking this because, in this case, the quantum state of the original >> would be destroyed, but not the "meat" (and his "consciousness"?) >> and recreated, at a distance, provided there is enough "meat" there. >> s. > > Either quantum teleportation or classical teleportation with > destruction of the original in the scanning process. > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From atymes at gmail.com Fri Jan 7 03:01:59 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 6 Jan 2011 19:01:59 -0800 Subject: Re: [ExI] Lunar dirt In-Reply-To: References: Message-ID: On Thu, Jan 6, 2011 at 5:21 PM, Keith Henson wrote: > Will someone enlighten me about what remote manipulators on the moon > are going to be doing? > > You don't have a lot to work with; lunar dirt is about as far from > useful objects as I can imagine. > > I have followed this topic since the mid-1970s and, as far as I know, > there was never a believable flow chart with rock going in and useful > stuff coming out the other end. > > Take solar cells. Anyone have an idea of what sort of plant it takes to > make silicon? What inputs the plant takes? What has to be frequently > replaced? Materials processing is a bit of a science in its own right, but fortunately, it is well enough established in this regard that the details rarely need to be looked at. (Still, it's a good question to ask, to make sure that there is a solution here.) For one example, take olivine, a material native on the moon.
From http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B984K-4W0SFYG-SF&_user=10&_coverDate=02%2F28%2F2009&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1598942661&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=7530ee67e2981bcaba741394c3faeb26&searchtype=a we see that water and carbon dioxide can catalyze its breakdown into silicon dioxide (silica) and magnesium oxide (a waste product as far as this example process is concerned, but with potential side uses). Silicon dioxide can be reacted with carbon - in coal or charcoal form - and heat like so: SiO2 + C -> Si + CO2 This provides the carbon dioxide for the previous step. Water, in the form of ice, has been found on the moon - though it may be precious, it is usable for this. Carbon dioxide, further water, and sunlight can be reacted in plants to split the carbon from the oxygen. Excess plant material can be burned (preferably in a low-oxygen process) to make more charcoal. There are probably better ways to do it, and certainly olivine is not the only type of rock that can be processed, but this gives you one example of inputs and outputs. Energy is, of course, consumed at various stages of this process. A power source will be needed at first, but possibly not a very long lived one, if this process can make and place (with those manipulators) solar panels to power itself further. (It is likely that such a factory would be its own first customer, as a practical matter.) From spike66 at att.net Fri Jan 7 05:15:54 2011 From: spike66 at att.net (spike) Date: Thu, 6 Jan 2011 21:15:54 -0800 Subject: [ExI] Lunar dirt In-Reply-To: References: Message-ID: <001201cbae29$f629ae50$e27d0af0$@att.net> On Behalf Of Keith Henson Subject: [ExI] Lunar dirt >...Will someone enlighten me about what remote manipulators on the moon are going to be doing? Keith Keith you ask a good question. 
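As a back-of-envelope check on the olivine-to-silicon chain Adrian sketched above (assuming pure forsterite, Mg2SiO4, and taking the reduction step exactly as he wrote it; real lunar olivine also carries iron, and real furnaces are messier):

```python
# Mass balance for the chain above:
#   Mg2SiO4 -> SiO2 + 2 MgO    (olivine breakdown)
#   SiO2 + C -> Si + CO2       (reduction step, as written above)
# One mole of olivine yields one mole of silicon, consuming one mole
# of carbon (which comes back as CO2 and is recycled through plants).
MOLAR_G = {"Si": 28.09, "C": 12.01, "Mg2SiO4": 140.69}  # g/mol

mol_si = 1000.0 / MOLAR_G["Si"]                  # mol of Si per kg
olivine_kg = mol_si * MOLAR_G["Mg2SiO4"] / 1000.0
carbon_kg = mol_si * MOLAR_G["C"] / 1000.0

print(f"per kg of Si: {olivine_kg:.2f} kg olivine, {carbon_kg:.2f} kg C")
# -> per kg of Si: 5.01 kg olivine, 0.43 kg C
```

So each kilogram of silicon needs roughly five kilograms of rock through the front of the plant, plus a much smaller, largely recyclable carbon inventory.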
Building small machinery capable of making solar cells is one hell of a difficult task. The machine itself would be built here and soft-landed there, where another machine would supply ground up lunar soil and energy. We may not get there any time really soon. I don't know enough about the manufacture of solar cells to be of much help there. What I actually had in mind is using lunar machinery to create tunnels, and then create tubes in the tunnels so they could hold water and oxygen liberated from the regolith. Then I can imagine devices that mine water ice and carry it to the tunnel. But without a looootta lotta solar cells, it would all be a pointless exercise. The end game would be to create a rail launcher so that we could hurl lunar soil into lunar orbit, and make stuff from it there. This task is harder than any space engineering feat I have seen to date. spike From sjatkins at mac.com Fri Jan 7 05:30:03 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 06 Jan 2011 21:30:03 -0800 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: On Jan 6, 2011, at 7:34 AM, John Clark wrote: > On Jan 4, 2011, at 4:32 PM, spike wrote: > >> A christian evolutionist is not necessarily a contradiction > > > There are christian evolutionists but it is a contradiction, they just live with it the same way all religious people deal with the absurdity of their beliefs, by putting their ideas in little air tight compartments with no way for them to interact with each other. Christians believe in a benevolent God who can do anything, so He could have produced the complexity of our world by just snapping his metaphorical fingers but instead he used Evolution, a hideously cruel process. 
That is FAR more evil than anything Satan did in the Bible. There are many many flavors of Christian - some are actually pretty sane. :) Mainstream Christianity for a while was denatured of all that biblical literalism and thought many things were mostly symbolic, for instance. Then we got the fundamentalist resurgence - in my opinion partially as reaction to Future Shock. If you hold that it is possible within the laws of physics of this universe to run multiple virtual universes then you believe in principle that it is possible for an entire universe to have a creator. You also believe in principle that things that apparently violate the physics of such a virtual universe can be done by those controlling the simulating machinery. And of course you believe that genetic algorithms could be used to develop beings of all kinds tuned to parts of this virtual universe. So yeah, a Christian that understands and affirms evolution is not that far fetched. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sondre-list at bjellas.com Fri Jan 7 08:19:47 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 09:19:47 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: No religion are sane. Religions are invalid as basis for morality, as the morality in all religions are not based upon realities in the world and doesn't stand up to scientific scrutiny. Religions is the mechanism of which to get people to submit their lives to rude and evil beings, of which then the leaders of each religion can manipulate their masses at will. Yet you are right, it's not far fetched that religious individuals can believe anything, even in evolution. 
Believing has nothing to do with understandability, truth and right. :-) - Sondre 2011/1/7 Samantha Atkins > > > > There are many many flavors of Christian - some are actually pretty sane. > :) Mainstream Christianity for a while was denatured of all that biblical > literalism and thought many things were mostly symbolic, for instance. Then > we got the fundamentalist resurgence - in my opinion partially as reaction > to Future Shock. > > If you hold that it is possible within the laws of physics of this universe > to run multiple virtual universes then you believe in principle that it is > possible for an entire universe to have a creator. You also believe in > principle that things that apparently violate the physics of such a virtual > universe can be done by those controlling the simulating machinery. And of > course you believe that genetic algorithms could be used to develop beings > of all kinds tuned to parts of this virtual universe. So yeah, a > Christian that understands and affirms evolution is not that far fetched. > > - samantha > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Fri Jan 7 09:22:05 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 7 Jan 2011 10:22:05 +0100 Subject: [ExI] Lunar dirt In-Reply-To: References: Message-ID: <20110107092205.GI16518@leitl.org> On Thu, Jan 06, 2011 at 06:21:14PM -0700, Keith Henson wrote: > Will someone enlighten me about what remote manipulators on the moon > are going to be doing? The first task would be exploration and mapping of the south and north poles. The second part would be building large scale thin-film PV arrays to mine volatiles and to build more thin-film panels, and then to expand the industry base until you can build linear motor launchers. 
> You don't have a lot to work with; lunar dirt is about as far from > useful objects as I can imagine. http://en.wikipedia.org/wiki/In-situ_resource_utilization ... It has long been suggested that solar cells could be produced from the materials present on the lunar surface. In its original form, known as the solar power satellite, the proposal was intended as an alternate power source for Earth. Solar cells would be shipped to Earth Orbit and assembled, the power being transmitted to Earth via microwave beams.[2] Despite much work on the cost of such a venture, the uncertainty lay in the cost and complexity of fabrication procedures on the lunar surface. A more modest reincarnation of this dream is for it to create solar cells to power future lunar bases. One particular proposal is to simplify the process by using Fluorine brought from Earth as potassium fluoride to separate the raw materials from the lunar rocks.[3] ... On the moon, the lunar highland material anorthite is similar to the earth mineral bauxite, which is an aluminium ore. Smelters can produce pure aluminum, calcium metal, oxygen and silica glass from anorthite. Raw anorthite is also good for making fiberglass and other glass and ceramic products.[11] Over twenty different methods have been proposed for oxygen extraction on the moon.[4] Oxygen is often found in iron rich lunar minerals and glasses as iron oxide. The oxygen can be extracted by heating the material to temperatures above 900 °C and exposing it to hydrogen gas. The basic equation is: FeO + H2 → Fe + H2O. This process has recently been made much more practical by the discovery of significant amounts of hydrogen-containing regolith near the moon's poles by the Clementine spacecraft.[12] Lunar materials may also be valuable for other uses. It has also been proposed to use lunar regolith as a general construction material,[13] through processing techniques such as sintering, hot-pressing, liquefaction, and the cast basalt method.
Cast basalt is used on Earth for construction of, for example, pipes where a high resistance to abrasion is required. Cast basalt has a very high hardness of 8 Mohs (diamond is 10 Mohs) but is also susceptible to mechanical impact and thermal shock[14] which could be a problem on the moon. Glass and glass fibre are straightforward to process on the moon and Mars, and it has been argued that the glass is optically superior to that made on the Earth because it can be made anhydrous.[11] Successful tests have been performed on earth using two lunar regolith simulants MLS-1 and MLS-2.[15] Basalt fibre has also been made from lunar regolith simulators. In August 2005, NASA contracted for the production of 16 metric tons of simulated lunar soil, or "Lunar Regolith Simulant Material."[16] This material, called JSC-1a, is now commercially available for research on how lunar soil could be utilized in-situ.[17] ... > I have followed this topic since the mid 1970s and, far as I know, > there was never a believable flow chart with rock going in and useful > stuff coming out the other end. Keith, I thought your knowledge of chemistry and geology was better than this. The Moon has the added advantage of free UHV, which would be expensive on Earth. Many processes can run in dry UHV, and you will use gases and liquids only where necessary. The important thing is that there's plenty of volatiles. That changes the game. It's much easier now. Add close to 24/7/365 insolation (eventually, you will build rings around the poles), and this is an excellent place for an industry. About the only pollution problem is fouling up your vacuum. > Take solar cells. Anyone have an idea of what sort of plant it takes I have a very good idea of that, yes. I've even toured the facilities. > make silicon? What inputs the plant takes? What has to be frequently > replaced? Current CdTe takes about 10 g/m^2. That's 100 m^2/kg. 10^5 m^2/ton. At 10% and 1.3 kW/m^2, that's 13 kW/kg, 1.3 MW/100 kg, or 13 MW/ton.
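[Editorial note] Eugen's back-of-envelope chain (10 g/m^2 -> 100 m^2/kg -> 13 kW/kg -> 13 MW/ton) checks out; a quick Python verification using only the numbers stated in the post:

```python
# Check of the thin-film CdTe specific-power figures quoted above.
# All inputs come from the post itself: 10 g/m^2 areal density,
# 10% conversion efficiency, ~1.3 kW/m^2 insolation in space.
areal_density = 10.0      # g of cell per m^2
efficiency = 0.10         # fraction of sunlight converted
insolation = 1.3          # kW/m^2 above the atmosphere

area_per_kg = 1000.0 / areal_density                    # m^2 of cell per kg
specific_power = area_per_kg * insolation * efficiency  # kW per kg

print(area_per_kg)                  # 100 m^2/kg, i.e. 10^5 m^2/ton
print(specific_power)               # 13 kW/kg, i.e. 13 MW/ton
print(100 * specific_power / 1000)  # a 100 kg package gives ~1.3 MW
```

The arithmetic agrees with the post's 13 MW/ton and with the "over a MW per 100 kg package" conclusion that follows.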
Assuming you deliver 100 kg packages, each will be good for over a MW of power. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sondre-list at bjellas.com Fri Jan 7 09:55:12 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 10:55:12 +0100 Subject: [ExI] A better option than fish oil? In-Reply-To: References: Message-ID: The ocean ain't exactly a pure ("clean") substance; a better alternative is plant-based oils. Though a sales guy would say anything to sell his products, no matter what real effect or fact or truth their products have. - Sondre On Thu, Jan 6, 2011 at 8:39 AM, John Grigg wrote: > A better means of getting Omega-3's? > > http://krilloil.mercola.com/krill-oil.html > > John > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Sondre Bjellås | Senior Solutions Architect | Steria http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Fri Jan 7 12:39:13 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 07 Jan 2011 13:39:13 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: <4D270971.3060505@aleph.se> On 2011-01-07 09:19, Sondre Bjellås wrote: > No religion are sane. Religions are invalid as basis for morality, as > the morality in all religions are not based upon realities in the world > and doesn't stand up to scientific scrutiny.
While the epistemic basis for religions is clearly bad, I doubt there is much science itself can say about the correctness of morality. If you are a moral realist (moral claims can be true or false), it is not obvious that the truth of moral statements can be investigated through a scientific experiment. How do you measure the appropriateness of an action? How do you test if utilitarianism is correct? And if you are a moral noncognitivist (moral claims are not true or false, but like attitudes or emotions) or error theorist (moral claims are erroneous like religion) at most you can collect statistics and correlates of why people believe certain things. If you are a subjectivist (moral claims are about subjective human mental states; they may or may not be relative to the speaker or their culture) you might be able to investigate them somewhat, with the usual messiness of soft science. Note that logic and philosophy can say a lot about the consistency of moral systems: it is pretty easy to show how many moral systems are self-contradictory or produce outcomes their proponents don't want, and it is sometimes even possible to prove more general theorems that show that certain approaches are in trouble (e.g. see http://sciencethatmatters.com/archives/38 ) Philosophy has been doing this for ages, to the minor annoyance of believers. Science is really good at undermining factually wrong claims (like the Earth being flat or that prayer has measurable positive effects on the weather). It might also be possible to use it to say things about properties of moral systems such as their computational complexity, evolutionary stability or how they tie in with the cognitive neuroscience and society of their believers. It is just that science is pretty bad at proving anything about the *correctness* of moral statements unless it is supplemented by a theory of what counts as correct, and that tends to come from the philosophy department (or, worse, the theology department...) 
This was a PSA brought to you by the philosophy department. Better living through thinking. -- Anders Sandberg Future of Humanity Institute Oxford University From sondre-list at bjellas.com Fri Jan 7 14:30:47 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 15:30:47 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <4D270971.3060505@aleph.se> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: Good feedback Anders and thanks! What I was referring to was not to apply science, but the tools of science (scientific method) as a means to verify the validity of morality. Those tools are, amongst others, empirical evidence and logic. There are no other rational means to understand what is true. Many people do not have a rational basis for their moral beliefs, and their morals easily crumble under any philosophical and logical investigation. - Sondre On Fri, Jan 7, 2011 at 1:39 PM, Anders Sandberg wrote: > On 2011-01-07 09:19, Sondre Bjellås wrote: > >> No religion are sane. Religions are invalid as basis for morality, as >> the morality in all religions are not based upon realities in the world >> and doesn't stand up to scientific scrutiny. >> > > While the epistemic basis for religions is clearly bad, I doubt there is > much science itself can say about the correctness of morality. > > If you are a moral realist (moral claims can be true or false), it is not > obvious that the truth of moral statements can be investigated through a > scientific experiment. How do you measure the appropriateness of an action? > How do you test if utilitarianism is correct?
And if you are a moral > noncognitivist (moral claims are not true or false, but like attitudes or > emotions) or error theorist (moral claims are erroneous like religion) at > most you can collect statistics and correlates of why people believe certain > things. If you are a subjectivist (moral claims are about subjective human > mental states; they may or may not be relative to the speaker or their > culture) you might be able to investigate them somewhat, with the usual > messiness of soft science. > > Note that logic and philosophy can say a lot about the consistency of moral > systems: it is pretty easy to show how many moral systems are > self-contradictory or produce outcomes their proponents don't want, and it > is sometimes even possible to prove more general theorems that show that > certain approaches are in trouble (e.g. see > http://sciencethatmatters.com/archives/38 ) Philosophy has been doing this > for ages, to the minor annoyance of believers. > > Science is really good at undermining factually wrong claims (like the > Earth being flat or that prayer has measurable positive effects on the > weather). It might also be possible to use it to say things about properties > of moral systems such as their computational complexity, evolutionary > stability or how they tie in with the cognitive neuroscience and society of > their believers. It is just that science is pretty bad at proving anything > about the *correctness* of moral statements unless it is supplemented by a > theory of what counts as correct, and that tends to come from the philosophy > department (or, worse, the theology department...) > > > This was a PSA brought to you by the philosophy department. Better living > through thinking. 
> > -- > Anders Sandberg > Future of Humanity Institute > Oxford University > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Sondre Bjellås | Senior Solutions Architect | Steria http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Fri Jan 7 15:11:04 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 7 Jan 2011 09:11:04 -0600 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net><007901cbac3d$bcf6e230$36e4a690$@att.net><001201cbac56$ea0ce670$be26b350$@att.net><89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> Message-ID: <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> Sondre wrote: " No religion are sane. Religions are invalid as basis for morality, as the morality in all religions are not based upon realities in the world and doesn't stand up to scientific scrutiny. " I disagree. Some religions offer a cultural perspective of a tribe's history and mythic lore. And many contain moral views, which are deeply based on the real world. It would be difficult to say whether they stand up to scientific scrutiny, but it would be difficult to say that transhumanism stands up to scientific scrutiny, or cyberculture, or feminism, etc. " Religions is the mechanism of which to get people to submit their lives to rude and evil beings, of which then the leaders of each religion can manipulate their masses at will. " All religions? All leaders of religions? All masses? " Yet you are right, it's not far fetched that religious individuals can believe anything, even in evolution. Believing has nothing to do with understandability, truth and right. " Believing does involve understandability, truth and right. Many religions offer elements of good judgment, however based on myth.
Natasha 2011/1/7 Samantha Atkins There are many many flavors of Christian - some are actually pretty sane. :) Mainstream Christianity for a while was denatured of all that biblical literalism and thought many things were mostly symbolic, for instance. Then we got the fundamentalist resurgence - in my opinion partially as reaction to Future Shock. If you hold that it is possible within the laws of physics of this universe to run multiple virtual universes then you believe in principle that it is possible for an entire universe to have a creator. You also believe in principle that things that apparently violate the physics of such a virtual universe can be done by those controlling the simulating machinery. And of course you believe that genetic algorithms could be used to develop beings of all kinds tuned to parts of this virtual universe. So yeah, a Christian that understands and affirms evolution is not that far fetched. - samantha _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Fri Jan 7 16:15:11 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 07 Jan 2011 17:15:11 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: <4D273C0F.8040809@aleph.se> On 2011-01-07 15:30, Sondre Bjellås wrote: > Good feedback Anders and thanks! Thanks! > What I was referring to was not to apply science, but the tools of > science (scientific method) as a means to verify the validity of > morality. Those tools are amongst others empirical evidence and logic.
> There are no other rational means to understand what is true. As far as we know. Consider that logic was invented around ~2400 years ago and the scientific method was invented ~400 years ago (and both have evolved *a lot* over just the last century) compared to the more than 10,000 years in which humanity has had complex societies able to afford some systematic truth-seeking. We haven't had much time to search for other, equally good, methods of improving our true knowledge. There might be really good methods we haven't discovered yet because it is so hard to do it. (Of course, there are also a whole bunch of philosophers of science who have trouble with what the concepts of truth and evidence are supposed to denote - it is a tricky business when you look at it closely. Not all of it is merely semantics either, as AI researchers have discovered the hard way.) > Many > people do not have a rational basis for their moral beliefs, and their > morals easily crumble under any philosophical and logical investigation. With a sufficiently good philosopher, anybody's belief system will tend to crumble. :-) The fact that we manage to get around in the real world despite the amazing crappiness of our knowledge, beliefs and thought processes is very interesting in itself. What does that tell us about ourselves, the world and the feasibility of making thinking systems?
-- Anders Sandberg Future of Humanity Institute Oxford University From jonkc at bellsouth.net Fri Jan 7 16:11:22 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 7 Jan 2011 11:11:22 -0500 Subject: [ExI] Morality (was: atheists declare religions as scams) In-Reply-To: <4D270971.3060505@aleph.se> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: On Jan 7, 2011, at 7:39 AM, Anders Sandberg wrote: > While the epistemic basis for religions is clearly bad, I doubt there is much science itself can say about the correctness of morality. Yes, but there isn't much religion can say about morality either, except that it's bad because God says it's bad; and if that is the basis of morality then it makes the statement "God is good" circular and vacuous. > it is pretty easy to show how many moral systems are self-contradictory I'd say that no moral system is entirely free from self-contradiction. You probably already know about the moral thought experiments devised by Judith Jarvis Thomson: 1) A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately you could flip a switch, which will lead the trolley down a different track saving the lives of the five. Unfortunately there is a single person tied to that track. Should you flip the switch and kill one man or do nothing and just watch five people die? 2) As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track killing him to save five people. Should you push the fat man over the edge or do nothing?
Almost everybody feels in their gut that the second scenario is much more questionable morally than the first, I do too, and yet really it's the same thing and the outcome is identical. The feeling that the second scenario is more evil than the first seems to hold true across all cultures; they even made slight variations of it involving canoes and crocodiles for South American Indians in Amazonia and they felt that #2 was more evil too. So there must be some code of behavior built into our DNA and it really shouldn't be a surprise that it's not 100% consistent; Evolution would have gained little survival value perfecting it to that extent, it works well enough at producing group cohesion as it is. On Jan 6, 2011, at 6:33 PM, Stathis Papaioannou wrote: > > There is no contradiction in accepting a religion but rejecting its > ancient beliefs as literal truth. Some theologians take the Bible > about as seriously as classical scholars take the Iliad and the > Odyssey; that is, they take it very seriously but they don't actually > believe that any of the supernatural stuff happened. True, but it seems to me that the minimum requirement for calling oneself religious is a belief in God, and if there is anybody who calls himself religious who doesn't think that God is benevolent I have yet to meet him. And that I maintain is inconsistent with Evolution, which can produce grand and beautiful things but only after eons of monstrous cruelty. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Jan 7 16:16:26 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 08:16:26 -0800 Subject: [ExI] worlds fastest helicopter Message-ID: <004401cbae86$3c003b90$b400b2b0$@att.net> Last month, Sikorsky tested its AX coaxial-rotor helicopter and achieved a speed of 260 knots. That makes this particular helicopter the fastest on the planet by about 100 knots.
The Sikorsky lads have really accomplished something here. Cool video too: http://devour.com/video/worlds-fastest-helicopter/ spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Fri Jan 7 17:33:00 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 07 Jan 2011 18:33:00 +0100 Subject: [ExI] Morality In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: <4D274E4C.7010106@aleph.se> On 2011-01-07 17:11, John Clark wrote: > On Jan 7, 2011, at 7:39 AM, Anders Sandberg wrote: > >> While the epistemic basis for religions is clearly bad, I doubt there >> is much science itself can say about the correctness of morality. > > Yes, but there isn't much religion can say about morality either, except > that it's bad because God says it's bad; and if that is the basis of > morality then it makes the statement "God is good" circular and vacuous. And if God wants you to do something because it is good to do, then you should do it anyway since it is good to do - no need to invoke God to motivate it. It is an old classic, commonly ascribed to Socrates. Annoys believers nicely. >> it is pretty easy to show how many moral systems are self-contradictory > > I'd say that no moral system is entirely free from self contradiction. > You probably already know about the moral thought experiments devised by > Judith Jarvis Thomson: I have experienced more trolley problems than you can imagine :-) Variants are about as common in ethics as E coli bacteria are in biology. 
I even got the chance to name the "Snowy San Francisco trolley problem" devised by a colleague (in which the fat man is sliding on a sled downhill to potentially block the trolley, but if he had not hit the trolley he would (hypothetically) have slid further and hit a lever that would have stopped the trolley - it all made sense in the right context, or so I was told :-) ). > Almost everybody feels in their gut that the second scenario is much > more questionable morally than the first, I do too, and yet really it's > the same thing and the outcome is identical. The feeling that the second > scenario is more evil than the first seems to hold true across all > cultures; they even made slight variations of it involving canoes and > crocodiles for South American Indians in Amazonia and they felt that #2 > was more evil too. So there must be some code of behavior built into our > DNA and it really shouldn't be a surprise that it's not 100% consistent; > Evolution would have gained little survival value perfecting it to that > extent, it works well enough at producing group cohesion as it is. But wait a bit. That everybody feels a certain way doesn't mean it is true. I can easily devise some food that everybody would find utterly disgusting, yet is harmless and very nutritious. If our reactions are evolved, that doesn't mean they are correct - we have evolved a lot of things that are not optimal. Exactly what this trolley problem situation tells us about ethics is much debated. Some people think it does show that folk ethics is wrong, and people should use properly designed ethical systems. Others think it indicates that academic ethics doesn't "get it" (my favorite example is how engineering students refuse to play the game, and instead devise ways of stopping the trolley).
My own take is that ethicists should pay more attention to moral cognition - which is a messy area where evolutionary psychology, cognitive neuroscience, social and cultural factors and heaven knows what else interact to produce our moral thinking. The dilemmas so beloved in academia are rarely the big moral problems in real life: there we usually know what is right, it is just that we don't do it. Fixing akrasia through a drug might do more for improving our species than any amount of philosophizing... maybe. -- Anders Sandberg Future of Humanity Institute Oxford University From sondre-list at bjellas.com Fri Jan 7 17:51:41 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 18:51:41 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> Message-ID: Thanks for the reply Natasha, I'm going to try and answer your questions. I'm not saying that understanding religion is not a valuable thing for our society. It will help us understand the absurdity which our ancestral parents cooked up with their imaginations. There are of course many good moral rules in the religions, quite obviously! We know there is a preferred behavior which we should try to understand and follow, and morality is the concept which describes this preferred behavior. If no preferred behavior existed, we probably wouldn't exist and sit here discussing this topic, as we wouldn't have survived in the natural world. Let's find a *true moral value*: Here are two moral sentences from the Christian bible: "You shall not kill/murder." "Neither shall you steal."
There are both rational and logical reasons why most religions have these as moral ideas; they originated as moral values which help society. Human progress and prosperity are dependent on true moral values. They can more clearly be described with a more modern understanding of this moral concept: "You should not initiate physical force against another person", according to the non-aggression principle ( http://en.wikipedia.org/wiki/Non-aggression_principle). The above moral value stands up to logical and empirical examination: If everyone murdered each other, we would all die. Nobody likes to be bullied, so we shouldn't physically attack each other. Nobody likes it when others steal from them, so we should avoid such activities. These are truths that exist both for me personally, and it's the reality that I'm evidently seeing in my society and amongst my friends. None of whom I steal from, and none of them steal from me. I've never threatened any of my friends physically, and neither have they towards me. Does that mean nobody ever attacks another person? No, the world is not perfect and neither is biology. If I tell you that cats have one head and two ears - you could verify that through empirical examination. Yet, there are cats born with two heads and three or more ears. We are not above the physical laws and the biological processes from which our bodies are manifested. So that's how we establish moral values which are true and correct. Let's move on to a *false moral value*: "thou shalt not cast thy seed upon the ground" This is regarding masturbation, which people a long, long time ago looked upon as something nasty, evil and destructive to a person's life. Let's examine the evidence: I have masturbated since I was a kid, and I enjoyed it then and I enjoy it now. From what I can read on the web, other people masturbate and they enjoy it as well. My penis has never fallen off due to this activity.
When I don't masturbate, my body will eventually ejaculate the semen while I'm sleeping. So this is evidently a false and incorrect moral value. Religions, especially those based upon Abraham, have a lot of false moral values, which are bad. Let's not discuss how many people have been killed over all those false moral values that are preached by religion. Consistency is very important in morality, and the inconsistencies in the morals preached by religions are many. That's how they have justified killing millions, even though it clearly says: You shall not kill and neither steal. One example of inconsistency from the Qur'an: "*I will cast terror into the hearts of those who disbelieve. Therefore strike off their heads and strike off every fingertip of them*" So correct and true moral values have to apply to all human beings; there cannot be any values which apply only to rich people, poor people, Iraqis, Jews, Christians, Africans, police officers or soldiers. If it is wrong for someone to shoot and kill another person in the USA, then it's wrong to shoot and kill another person if you are a soldier in Afghanistan. Religion has plenty of good moral values; they were created in ancient societies and helped us live better together. Religion is founded on utter submission to one or more deities and throwing personal rationality out the window. Religion is about faith, founded on irrational thoughts and ideas. You can't prove a religious faith, yet I consider there to be a lot of evidence in the world which tells us where religion came from and how it came about, which for me are clear indications against religion. On Transhumanism: I think outsiders sometimes consider it/us religious, as if we had replaced other religions with Transhumanism. I clearly don't consider it in such a manner; self-improvement is a great moral value. It gives me more energy, it involves improving my own knowledge, and it gives me better health.
All of which my family and friends benefit greatly from. I hope I made myself understood; I'm not the world's greatest debater and English is not my native language. Thanks for your attention and time! - Sondre 2011/1/7 Natasha Vita-More > Sondre wrote: > > " No religion are sane. Religions are invalid as basis for morality, as > the morality in all religions are not based upon realities in the world and > doesn't stand up to scientific scrutiny. " > > I disagree. Some religions offer a cultural perspective of a tribe's > history and mythic lore. And many contain moral views, which are deeply > based on the real world. It would be difficult to say whether they stand up > to scientific scrutiny, but it would be difficult to say that transhumanism > stands up to scientific scrutiny, or cyberculture, or feminism, etc. > > " Religions is the mechanism of which to get people to submit their lives > to rude and evil beings, of which then the leaders of each religion can > manipulate their masses at will. " > > All religions? All leaders of religions? All masses? > > " Yet you are right, it's not far fetched that religious individuals can > believe anything, even in evolution. Believing has nothing to do > with understandability, truth and right. " > > Believing does involve understandability, truth and right. > > Many religions offer elements of good judgment, however based on myth. > > Natasha > > 2011/1/7 Samantha Atkins > >> >> >> >> There are many many flavors of Christian - some are actually pretty sane. >> :) Mainstream Christianity for a while was denatured of all that biblical >> literalism and thought many things were mostly symbolic, for instance. Then >> we got the fundamentalist resurgence - in my opinion partially as reaction >> to Future Shock.
>> >> If you hold that it is possible within the laws of physics of this >> universe to run multiple virtual universes then you believe in principle >> that it is possible for an entire universe to have a creator. You also >> believe in principle that things that apparently violate the physics of such >> a virtual universe can be done by those controlling the simulating >> machinery. And of course you believe that genetic algorithms could be used >> to develop beings of all kinds tuned to parts of this virtual universe. >> So yeah, a Christian that understands and affirms evolution is not that >> far fetched. >> >> - samantha >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Sondre Bjellås | Senior Solutions Architect | Steria http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sondre-list at bjellas.com Fri Jan 7 18:29:44 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 19:29:44 +0100 Subject: [ExI] Morality (was: atheists declare religions as scams) In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: I don't see the problem with this moral example, and of course #2 is the worse one. In the first one, you are not inflicting death upon the single individual. In the second example, you are initiating physical force towards another human being, which is incredibly bad and immoral.
The moral thing to do would be to allow the 5 people to die, while the fat man and I survive. The last alternative would be to sacrifice myself, which I would do for some special people. I have much more of a problem with the first example than with the second one, unlike most other people, it seems. Saving other humans is a good thing to do, and clearly a thing we can empirically prove is good. Having one person die is better than having two people die. Yet, some people seem to draw the conclusion from this good deed that saving other humans somehow stands above the existing moral values which we have identified to be true and correct, such as the non-aggression principle. Moral values have to hold true in all contexts and not contradict each other, or else they are neither true nor correct; as all humans are of equal biological bodies, our moral values apply to everyone in the same way. Example: Is it morally right to use physical force towards other human beings if that will save some other human beings? The moral truth of "you shall not initiate physical force" tells us that NO, we should not morally accept the killing of another human being. Not for two people, not for 5 people, not for a thousand people and not even for a million people. Most people don't have a moral stance on this question that is true and correct; they have a moral value based upon their "feeling". Their "feeling" might be 5 people, 10 people or a million people. That's why the USA could invade Iraq and Afghanistan: people think they are morally right when they can save someone by killing others. They (the US government) tried to apply reasons for going to war which most people empathized with, such as: "Saddam Hussein is an evil dictator and the Iraqis should be set free" and "It's self-defense and retribution for the 9/11 killings". Most people have not yet woken up from the primordial soup from which most of our current moral values stem...
Changing one's moral values is very expensive for any individual, which is why most people won't change them once they have decided. Please excuse me if I didn't make myself understandable... :-) - Sondre 2011/1/7 John Clark > On Jan 7, 2011, at 7:39 AM, Anders Sandberg wrote: > > While the epistemic basis for religions is clearly bad, I doubt there is > much science itself can say about the correctness of morality. > > > Yes, but there isn't much religion can say about morality either, except > that it's bad because God says it's bad; and if that is the basis of > morality then it makes the statement "God is good" circular and vacuous. > > it is pretty easy to show how many moral systems are self-contradictory > > > I'd say that no moral system is entirely free from self contradiction. You > probably already know about the moral thought experiments devised by Judith > Jarvis Thomson: > > 1) A trolley is running out of control down a track. In its path are five > people who have been tied to the track by a mad philosopher. Fortunately you > could flip a switch, which will lead the trolley down a different track > saving the lives of the five. Unfortunately there is a single person tied to > that track. Should you flip the switch and kill one man or do nothing and > just watch five people die? > > 2) As before, a trolley is hurtling down a track towards five people. You > are on a bridge under which it will pass, and you can stop it by dropping a > heavy weight in front of it. As it happens, there is a very fat man next to > you - your only way to stop the trolley is to push him over the bridge and > onto the track killing him to save five people. Should you push the fat man > over the edge or do nothing? > > Almost everybody feels in their gut that the second scenario is much more > questionable morally than the first, I do too, and yet really it's the same > thing and the outcome is identical.
The feeling that the second scenario is > more evil than the first seems to hold true across all cultures; they even > made slight variations of it involving canoes and crocodiles for South > American Indians in Amazonia and they felt that #2 was more evil too. So > there must be some code of behavior built into our DNA and it really > shouldn't be a surprise that it's not 100% consistent; Evolution would have > gained little survival value perfecting it to that extent, it works well > enough at producing group cohesion as it is. > > On Jan 6, 2011, at 6:33 PM, Stathis Papaioannou wrote: > > > There is no contradiction in accepting a religion but rejecting its > ancient beliefs as literal truth. Some theologians take the Bible > about as seriously as classical scholars take the Iliad and the > Odyssey; that is, they take it very seriously but they don't actually > believe that any of the supernatural stuff happened. > > > True, but it seems to me that the minimum requirement for calling oneself > religious is a belief in God, and if there is anybody who calls himself > religious who doesn't think that God is benevolent I have yet to meet him. > And that I maintain is inconsistent with Evolution, which can produce grand > and beautiful things but only after eons of monstrous cruelty. > > John K Clark > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Sondre Bjellås | Senior Solutions Architect | Steria http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Fri Jan 7 18:37:47 2011 From: pharos at gmail.com (BillK) Date: Fri, 7 Jan 2011 18:37:47 +0000 Subject: [ExI] Morality In-Reply-To: <4D274E4C.7010106@aleph.se> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> <4D274E4C.7010106@aleph.se> Message-ID: On Fri, Jan 7, 2011 at 5:33 PM, Anders Sandberg wrote: > Exactly what this trolley problem situation tells us about ethics is very > debated. Some people think it does show that folk ethics is wrong, and > people should use properly designed ethical systems. Others think it > indicates that academic ethics doesn't "get it" (my favorite example is how > engineering students refuse to play the game, and instead devise ways of > stopping the trolley). > > My own take is that ethicists should pay more attention to moral cognition - > which is a messy area where evolutionary psychology, cognitive neuroscience, > social and cultural factors and heavens know what else interact to produce > our moral thinking. The dilemmas so beloved in academia are rarely the big > moral problems in real life: there we usually know what is right, it is just > that we don't do it. Fixing acrasia through a drug might do more for > improving our species than any amount of philosophizing... maybe. > > This response is probably much too simplistic for a professional philosopher, :) but it probably expresses a fairly universal view. I think what unsettles humanity about pushing the fat man in front of the trolley is the virtually universal rule 'Thou shalt not kill' (without a very very good reason). And also the Golden Rule -- nobody wants to actually be the fat man in question. 
Once the door is opened to intentionally kill one (or many) for the greater good, then this will almost certainly be misused by those in power to justify killing those they disapprove of. So it is safer to forbid it from the beginning, rather than getting into endless arguments about when it might be justified. Do we nuke Iran and kill 100,000 to stop a war that would kill many millions? Just say No. BillK From js_exi at gnolls.org Fri Jan 7 19:01:00 2011 From: js_exi at gnolls.org (J. Stanton) Date: Fri, 07 Jan 2011 11:01:00 -0800 Subject: [ExI] A better option than fish oil? In-Reply-To: References: Message-ID: <4D2762EC.4090205@gnolls.org> On 1/7/11 4:00 AM, Sondre Bjellås wrote: > The ocean ain't exactly a > pure ("clean") substance, a better alternative is plant-based oils. > Though a sales guy would say anything to sell his products, no matter > what real effect or fact or truth their products have. - Sondre On Thu, > Jan 6, 2011 at 8:39 AM, John Grigg wrote: >> > A better means of getting Omega-3's? What evidence do you have for this assertion? Consumer Reports found no measurable mercury in any fish oil capsules they tested. (There's a lot to say about that Mercola page, because many of its claims about krill vs fish oil are either false or suspect...but I'm still working on it and it'll take a while.) JS http://www.gnolls.org From spike66 at att.net Fri Jan 7 18:39:31 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 10:39:31 -0800 Subject: [ExI] atheists declare religions as scams.
In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> Message-ID: <007d01cbae9a$39803050$ac8090f0$@att.net> On Behalf Of Sondre Bjellås >Let's move on to a false moral value: > "thou shalt not cast thy seed upon the ground" Sondre, the way you wrote this sounds like you quoted an ancient source. Which? Where? It isn't anywhere in the Christian Bible. > This is regarding masturbation, of which people a long, long time ago looked upon as something nasty, evil and destructive to a person's life Indeed? If you refer to the story of Onan, his sin was not masturbation, but rather his intentionally failing to impregnate his late brother's widow. See Genesis 38:9. Your argument can actually still be saved regarding a false and incorrect moral value, if you modify the text following your original comment to something along the lines of "Ancient religions required the brother of the deceased man to impregnate his widow, so that the deceased would have heirs." Clearly this is a failing of any society and belief system that would propagate such an egregious notion. I suppose it depends on how babelicious is one's sister-in-law, but still. Unintentionally hilarious is the quote >"I will cast terror into the hearts of those who disbelieve. Therefore strike off their heads and strike off every fingertip of them" Oh NO, not the fingertips, anything but that! {8^D In what order are these off-strikings to be done? Strike off their fingertips, so now they are in a lot of pain, then strike off their heads? That makes more sense than the reverse. spike
URL: From hkeithhenson at gmail.com Fri Jan 7 18:25:25 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 7 Jan 2011 11:25:25 -0700 Subject: [ExI] Lunar dirt Message-ID: On Fri, Jan 7, 2011 at 5:00 AM, Adrian Tymes wrote: snip > Silicon dioxide can be reacted with carbon - in coal or charcoal form - > and heat like so: SiO2 + C -> Si + CO2 I understand the chemistry, perhaps better than most, having made a number of metals and worked a lot of chemistry in my misspent youth. I have also spent serious time inside monster processing plants like a 30,000 ton per day concentrator, copper and aluminum smelters, and oil refineries. Plus a few power plants. The question I have is how remotely run robots relate to a processing plant able to do something serious. I.e., what are you proposing to *do* with them? > From: "spike" snip > The end game would be to create a rail launcher so that we could hurl lunar > soil into lunar orbit, and make stuff from it there. This task is harder > than any space engineering feat I have seen to date. Although space elevators may not be possible from Earth, they can be built out through L1 with Spectra, currently used for dental floss. One able to lift a thousand tons per day (using a moving cable design) can probably be constructed for a lower mass budget than a lunar seed. It's still in the range of 100,000 tons, but it would lift its own mass in 100 days. A lunar elevator pretty much displaces magnetic launchers of all kinds. That's a big change from the days of Dr. O'Neill. > From: Eugen Leitl > On Thu, Jan 06, 2011 at 06:21:14PM -0700, Keith Henson wrote: >> Will someone enlighten me about what remote manipulators on the moon >> are going to be doing? > > The first task would be exploration and mapping of the south > and north poles.
The second part would be building large scale > thin-film PV arrays to mine volatiles and to build more > thin-film panels, and then to expand the industry base > until you can build linear motor launchers. You can't build lunar mass drivers just anywhere. Google "achromatic orbits Heppenheimer" to see why. So you need a road or something from the poles to the lunar equator. You also need a "catcher", which is a massive structure in its own right. >> You don't have a lot to work with; lunar dirt is about as far from >> useful objects as I can imagine. > > http://en.wikipedia.org/wiki/In-situ_resource_utilization snip That wasn't the question. Specifically, what are you doing with the robots to construct something useful? > >> I have followed this topic since the mid 1970s and, as far as I know, >> there was never a believable flow chart with rock going in and useful >> stuff coming out the other end. > > Keith, I thought your knowledge of chemistry and geology was > better than this. snip It's good enough, I think, to call BS on vague handwaving. >> Take solar cells. Anyone have an idea of what sort of plant it takes > > I have a very good idea of that, yes. I've even toured the facilities. The oil-refinery-like facilities where they purify the silicon? Where did you find one that would give you a tour? Or do you just mean the end stage where they mount cells? >> make silicon? What inputs the plant takes? What has to be frequently >> replaced? > > Current CdTe takes about 10 g/m^2. That's 100 m^2/kg. 10^5 m^2/ton. > At 10% and 1.3 kW/m^2, that's 13 kW/kg, 1.3 MW/100 kg, or 13 MW/ton. > > Assuming you deliver 100 kg packages, each will be good for over > a MW of power. What are you going to deposit the CdTe on? How do you make it? What do you use for wires to get the power from where you make it to where you use it? I am not saying it's impossible, just poorly thought out. Few numbers on power consumption, heat rejection, production rates, etc.
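For what it's worth, the quoted CdTe specific-power arithmetic does check out. A minimal sanity-check sketch (the input figures -- 10 g/m^2 of film, 10% efficiency, 1.3 kW/m^2 flux -- are the ones from the post, not independently verified; the variable names are mine):

```python
# Sanity check of the quoted CdTe thin-film power-per-mass figures.
# All inputs are taken from the post above; nothing here is measured data.
film_mass_g_per_m2 = 10.0   # CdTe film mass per square meter
flux_kw_per_m2 = 1.3        # solar flux near 1 AU, per the post
efficiency = 0.10           # assumed 10% conversion efficiency

area_m2_per_kg = 1000.0 / film_mass_g_per_m2                        # 100 m^2 of film per kg
specific_power_kw_per_kg = area_m2_per_kg * flux_kw_per_m2 * efficiency   # ~13 kW/kg

power_per_100kg_package_mw = specific_power_kw_per_kg * 100.0 / 1000.0    # ~1.3 MW
power_per_ton_mw = specific_power_kw_per_kg * 1000.0 / 1000.0             # ~13 MW/ton

print(f"{specific_power_kw_per_kg:.1f} kW/kg")
print(f"{power_per_100kg_package_mw:.1f} MW per 100 kg package")
print(f"{power_per_ton_mw:.1f} MW/ton")
```

All three results match the post (13 kW/kg, 1.3 MW per 100 kg, 13 MW/ton), so the real disagreement is not the arithmetic but what the figures leave out: substrate, wiring, and the production machinery itself.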
And don't forget the scaling problems up *or* down. Keith From spike66 at att.net Fri Jan 7 19:12:56 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 11:12:56 -0800 Subject: [ExI] Morality In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> <4D274E4C.7010106@aleph.se> Message-ID: <008801cbae9e$e45a4ca0$ad0ee5e0$@att.net> ... On Behalf Of BillK ... >...Once the door is opened to intentionally kill one (or many) for the greater good, then this will almost certainly be misused by those in power to justify killing those they disapprove of. So it is safer to forbid it from the beginning, rather than getting into endless arguments about when it might be justified. Do we nuke Iran and kill 100,000 to stop a war that would kill many millions? Just say No... BillK BillK, if only morality were this simple, life would be free of the maddening moral ambiguity we face every day. A better example is the 1993 mutual genocide between the Hutus and Tutsis of Rwanda and Burundi. If it had been as simple as one country vs another, we might have known what to do: establish a buffer zone between them, Korean style. But this was the Hutus slaying Tutsis simultaneously in both Rwanda and Burundi, which caused a most perplexing moral situation on the part of the UN. That made it a simultaneous civil war in two neighboring nations; the UN is most reluctant to get involved in a civil war. The world watched helplessly in appalled horror, as the Toronto Blue Jays, a Canadian team, defeated the Philadelphia Phillies in the World Series. We knew not what to do or what to say, but knew that a moral travesty was taking place between and among neighboring nations. We took the time-honored approach and did nothing.
Years later, westerners began to hear of the murderous genocide which had taken place in Rwanda and Burundi, at which time we asked ourselves severely introspective moral questions, such as "Where the hell is Rwanda and Burundi?" OK, suppose we had all been internet users back then, and actually heard of either of these events as they took place. What would we do? How does the rule Just Say No apply when we are witnessing genocide? That's what we did when we witnessed both genocide and slavery in Europe in the early 1940s: we just said no. That didn't work out so well from what I hear. spike From rtomek at ceti.pl Fri Jan 7 20:03:15 2011 From: rtomek at ceti.pl (Tomasz Rola) Date: Fri, 7 Jan 2011 21:03:15 +0100 (CET) Subject: [ExI] Morality (was: atheists declare religions as scams) In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: On Fri, 7 Jan 2011, John Clark wrote: > 1) A trolley is running out of control down a track. In its path are > five people who have been tied to the track by a mad philosopher. > Fortunately you could flip a switch, which will lead the trolley down a > different track saving the lives of the five. Unfortunately there is a > single person tied to that track. Should you flip the switch and kill > one man or do nothing and just watch five people die? Maybe I shouldn't, but I probably would. Unless, judging from the five people's positions and the fact that it only takes one to stop the trolley, it would make more sense to do nothing... Unless I had some other thoughts on this, like saving those who were younger or more sympathetic? Or maybe something else, like yelling loudly. Depends on how much I would be prepared to flush away any rational thinking. I don't think morals & ethics apply here.
> 2) As before, a trolley is hurtling down a track towards five people. > You are on a bridge under which it will pass, and you can stop it by > dropping a heavy weight in front of it. As it happens, there is a very > fat man next to you - your only way to stop the trolley is to push him > over the bridge and onto the track killing him to save five people. > Should you push the fat man over the edge or do nothing? My cynical me tells me: stay away from the fat man, because you weigh less, and if he thinks the same way it will be him throwing you rather than the other way around. So, step back. And again, step back, slowly. However, if at any time later I became convinced that the mad philosopher had forced me into his mad experiment, I would probably set out to find him and kick his ass very, very hard. Maybe he wouldn't learn much from this but I guess I would feel better. But if he had enough wits to tie five people, he would certainly learn, even if only a bit. Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From pharos at gmail.com Fri Jan 7 20:24:19 2011 From: pharos at gmail.com (BillK) Date: Fri, 7 Jan 2011 20:24:19 +0000 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <007d01cbae9a$39803050$ac8090f0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> Message-ID: 2011/1/7 spike wrote: > Indeed? If you refer to the story of Onan, his sin was not masturbation, > but rather his intentionally failing to impregnate his late brother's > widow. See Genesis 38:9.
Your argument can actually still be saved > regarding a false and incorrect moral value, if you modify the text > following your original comment to something along the lines of "Ancient > religions required the brother of the deceased man to impregnate his widow, > so that the deceased would have heirs." Clearly this is a failing of any > society and belief system that would propagate such an egregious notion. I > suppose it depends on how babelicious is one's sister-in-law, but still. > > The story of Onan is certainly the Biblical reference that Sondre meant. But I don't think the interpretation is as clear as you state. (What interpretation of ancient documents ever is?) :) In ancient times Jehovah was very keen on the tribe reproducing as much as possible. (Many references). And Onan failed in this respect. But the punishment for this behaviour wasn't death. See Deuteronomy 25:5-10, where a brother refuses to marry his dead brother's wife and produce children and is only punished by public humiliation. Onan enjoyed his brother's wife but refused to produce children by wasting his seed on the ground. So he deserved death for two reasons: illegally enjoying the wife *and* contraception. So masturbation isn't directly mentioned as forbidden behaviour, except when it is used as a method of contraception. Not having as many children as possible was a sin in ancient times. But it is only a small step to say that masturbation should also be forbidden as a sin, because seed gets wasted instead of producing children. > > Unintentionally hilarious is the quote >>"I will cast terror into the hearts of those who disbelieve. Therefore >> strike off their heads and strike off every fingertip of them" > > This quote is not a general instruction to Muslim believers. It is part of the story of the Battle of Badr, the first major battle between the Muslims and the Meccan pagans around 625 C.E. It was spoken to the Muslim troops to inspire them for the battle.
This is exactly the same as all the ferocious orders to Israelite troops in Old Testament battles: 'kill all the men and take the women' etc. BillK From eugen at leitl.org Fri Jan 7 20:29:52 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 7 Jan 2011 21:29:52 +0100 Subject: [ExI] Lunar dirt In-Reply-To: References: Message-ID: <20110107202952.GS16518@leitl.org> On Fri, Jan 07, 2011 at 11:25:25AM -0700, Keith Henson wrote: > I understand the chemistry, perhaps better than most having made a > number of metals and working a lot of chemistry in my misspent youth. > Also have spent serious time inside monster processing plants like a > 30,000 ton per day concentrator, copper and aluminum smelters, and oil > refineries. Plus a few power plants. The question is one of scale. Bootstrap is about processing kg, not Gg. Once you can process kg/day you can consider how you process 100 kg/day, and after that 100 Mg/day. And only after that are you dealing with scales comparable to our large terrestrial facilities. > The question I have is how remotely run robots relate to a processing > plant able to do something serious. I.e., what are you proposing to > *do* with them. The initial stage is prospection. You build a modern version of the Lunokhod, deploy hundreds of these, and let them cruise the terrain. Unlike Mars, turnaround for remote control is very quick. There's plenty more power (twice the insolation). Abrasion is higher, so you have to expect shorter lifetimes than on Mars, unless you harden the vehicles for lunar specifics (dust, electrostatics, radiation). What interesting things could you do with small robots with a centaur-like wheeled body plan, manipulators and nonphysical manipulators (electron- and ion-beam probes and manipulative probes)? I think quite a lot. They would carry material to and from the central processing plant.
They would prospect (mass spectroscopy, electron and ion beam, solid-state laser), transport parts (sintered and cast), directly pattern substrate (sputtering, XY-electron and ion beam forming) and such. Think of them like termites or ants, with the central facility being the ant queen. The goal is for the swarm to grow until it can produce a new clone on adjacent terrain. The eventual goal is decentral control, emergent behaviour, in fact. The reason for that is maximum behaviour complexity from minimal agent complexity. > Although space elevators may not be possible from earth, they can be > built out through L1 with Spectra, currently used for dental floss. This is another point I wanted to make, but forgot. While space tethers and hooks on Earth must be made from unobtainium and have to deal with the atmosphere, commercial aramid is enough for the Moon. > One able to lift a thousand tons per day (using a moving cable design) > can probably be constructed for a lower mass budget than a lunar seed. > It's still in the range of 100,000 tons, but it would lift it's own > mass in 100 days. > > Lunar elevator pretty much displaces magnetic launchers of all kinds. The question is one of power: the idea is to saturate the Moon's surface with photovoltaics (alt.chrome.the.moon) and launchers, resulting in a massive enough launch facility that the Moon starts losing mass.
Google achromatic Right now we're still in early bootstrap. Early bootstrap is a lot different from late boostrap. > orbits heppenheimer to see why. So you need a road or something from > the poles to the Lunar equator. You also need a "catcher" which is a You would start with rings around the poles (because only small terrain features at the poles are semipermanently illuminated, so the amount of power you harvest is limited), with the rings incrementally expanding towards the equator. The question is how much money you would be able to sink into the venture until people start actually looking for ROI. > massive structure in its own right. You mean Earth-side? I don't see how, since you would launch powered packets, with enough guiding logic and propulsion on board for aim for aerobraking corridor, and then subsequent maneuvers. And, of course, you can just use plasma thrusters once you've inserted into low lunar orbit (with rocket corrections). > >> You don't have a lot to work with; lunar dirt is about as far from > >> useful objects as I can imagine. > > > > http://en.wikipedia.org/wiki/In-situ_resource_utilization > > snip > > That wasn't the question. Specifically what are you doing with the > robots to construct something useful> > > > >> I have followed this topic since the mid 1970 and, far as I know, > >> there was never a believable flow chart with rock going in and useful > >> stuff coming out the other. > > > > Keith, I thought your knowledge of chemistry and geology was > > better than this. > > snip > > It good enough, I think, to call BS on vague handwaving. The point is that there are hundreds of easily accessible papers which are anything but. > >> Take solar cells. ?Anyone have an idea of what sort of plant it takes > > > > I have a very good idea of that, yes. I've even toured the facilities. > > The oil refinery like facilities where they purify the silicon? Where > did you find one that would give you a tour? 
Or do you just mean the > end stage where they mount cells? No, it was a large facility (Wacker Burghausen) in the mid-1980s. Of course that one was on Earth, and it was silicon-specific. On the Moon, you would heavily modify parts or the whole of the process. > >> make silicon? What inputs the plant takes? What has to be frequently > >> replaced? > > > > Current CdTe takes about 10 g/m^2. That's 100 m^2/kg. 10^5 m^2/ton. > > At 10% and 1.3 kW/m^2, that's 13 kW/kg, 1.3 MW/100 kg, or 13 MW/ton. > > > > Assuming you deliver 100 kg packages, each will be good for over > > a MW of power. > > What are you going to deposit the CdTe on? How do you make it? What A dumb approach would be to melt sifted regolith in situ with a large parabolic aluminized Mylar mirror, and sputter semiconductors on top. A less dumb approach would be to sift, then remove iron from the regolith with magnetic and electrostatic separation, then reduce with hydrogen (electrolyzed from polar cryotrap regolith), then crush and do another magnetic separation step, then melt it into sheet glass (why not float glass) or sinter plates on top of loose powder and lift them off, then sputter. Add electron beams for processing and ion beams for patterning. The iron you could form into foil, which would also be a good substrate for thin-film PV. This *is* handwaving, but I have a hunch that by running a production facility for a year you will learn a lot more than in a decade's worth of terra-side lunar simulators. > do you use for wires to get the power from where you make it to where First-gen metals will be from Terra; second-gen will be aluminium from cryolite-facilitated reduction of regolith silica, iron, and whatever else you can make. > you use it? > > I am not saying it's impossible, just poorly thought out. Few numbers If I had a good plan I wouldn't be posting this to a public mailing list, but doing a lot of knob polishing with the usual suspects. This isn't my field, nor is this my project.
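The quoted CdTe power-per-mass chain can be checked directly. A quick sketch using only the figures quoted above (10 g/m^2, 10% efficiency, 1.3 kW/m^2 insolation):

```python
# Sanity check of the quoted CdTe thin-film figures; all inputs are from the post.
areal_density_g_per_m2 = 10     # "Current CdTe takes about 10 g/m^2"
efficiency = 0.10               # "At 10%"
insolation_kw_per_m2 = 1.3      # "1.3 kW/m^2" (rounded solar constant)

area_per_kg_m2 = 1_000 / areal_density_g_per_m2   # g/kg over g/m^2 -> 100 m^2/kg
specific_power_kw_per_kg = area_per_kg_m2 * insolation_kw_per_m2 * efficiency
power_per_100kg_package_kw = 100 * specific_power_kw_per_kg

print(area_per_kg_m2, specific_power_kw_per_kg, power_per_100kg_package_kw)
# 100.0 13.0 1300.0 -- i.e. 13 kW/kg, and 1.3 MW ("over a MW") per 100 kg package
```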
Still, it is heartening that people in the mainstream are at all seriously pushing ISRU, even as an auxiliary/cost-saving method for pure science and manned research missions. Once this stuff flies I have a hunch this tail will start wagging the dog. > on power consumption, heat rejection, production rates, etc. And > don't forget the scaling problems up *or* down. It is definitely a hard problem, and thankfully I'm not the one who has to figure it all out. There are thousands out there who eventually will. And we will watch, and marvel. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From anders at aleph.se Fri Jan 7 21:45:31 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 07 Jan 2011 22:45:31 +0100 Subject: [ExI] Morality In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> <4D274E4C.7010106@aleph.se> Message-ID: <4D27897B.1070108@aleph.se> BillK wrote: > I think what unsettles humanity about pushing the fat man in front of > the trolley is the virtually universal rule 'Thou shalt not kill' > (without a very very good reason). And also the Golden Rule -- nobody > wants to actually be the fat man in question. > Plus, it is up close and personal. If you rephrase the example as pulling a lever that causes the man to fall down, then many more will accept it. It is hard to say whether the framing effect or the assumption that actions will be repeated is the determining factor in people's reactions - it could be different from person to person.
> Once the door is opened to intentionally kill one (or many) for the > greater good, then this will almost certainly be misused by those in > power to justify killing those they disapprove of. So it is safer to > forbid it from the beginning, rather than getting into endless > arguments about when it might be justified. Do we nuke Iran and kill > 100,000 to stop a war that would kill many millions? Just say No. > This is basically rule utilitarianism - act according to rules you think will maximize the good on average. A kind of wussy but plausible compromise between the duties of deontology (don't do acts you do not wish to see turned into general rules) and the expectation maximization of consequentialism. It reduces the cognitive overhead by creating heuristic rules which are merely instrumental tools for acting better, not moral rules in themselves. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From sondre-list at bjellas.com Fri Jan 7 21:50:16 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 22:50:16 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <007d01cbae9a$39803050$ac8090f0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> Message-ID: Thanks Spike for pointing out an obvious error in my argument :-) I was raised a Christian from childhood and this was something I learned early on. From that basis I tried to search for some quotes from the bible, and was so eager to continue writing my reply that I failed to check the validity of the quote. I found it in quotation marks, figured it was a real quote, and clearly it was not.
The flaw in this lies in my childhood education as a Christian, and the answer lies in about.com: http://christianteens.about.com/od/whatthebiblesaysabout/f/masturbation.htm "Does the Bible talk about masturbation? Is there clear scripture that tells us if masturbation is right or wrong? While Christians debate the topic of masturbation, there is no scripture that directly mentions the act. Yet some Christians do refer to specific scripture that describes healthy and unhealthy sexual behavior in order to determine whether or not masturbation is a sin." Love the way you see the humor in the Qur'an ;-) Which is more or less correct, I think. There are other examples in the Ten Commandments, but much more elsewhere in the religious texts. One quick example: "Honor your father and your mother". Nobody should honor their parents just because they are relatives. If your parents abuse you, why should you honor them? Thanks again! - Sondre 2011/1/7 spike > *On Behalf Of *Sondre Bjellås > *...* > > > > >Let's move on to a *false moral value*: > > > > > "thou shalt not cast thy seed upon the ground" > > > > Sondre, the way you wrote this sounds like you quoted an ancient source. > Which? Where? It isn't anywhere in the christian bible. > > > > > This is regarding masturbation, of which people a long, long time ago > looked upon as something nasty, evil and destructive to a person's life... > > > > Indeed? If you refer to the story of Onan, his sin was not masturbation, > but rather his intentionally failing to impregnate his late brother's > widow. See Genesis 38:9. Your argument can actually still be saved > regarding a false and incorrect moral value, if you modify the text > following your original comment to something along the lines of "Ancient > religions required the brother of the deceased man to impregnate his widow, > so that the deceased would have heirs." Clearly this is a failing of any > society and belief system that would propagate such an egregious notion.
I > suppose it depends on how babelicious is one's sister-in-law, but still. > > > > Unintentionally hilarious is the quote > > > > >"*I will cast terror into the hearts of those who disbelieve. Therefore > strike off their heads and strike off every fingertip of them*" > > > > Oh NO, not the fingertips, anything but that! {8^D In what order are these > off-strikings to be done? Strike off their fingertips, so now they are in a > lot of pain, then strike off their heads? That makes more sense than the > reverse. > > > > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Sondre Bjellås | Senior Solutions Architect | Steria http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Jan 7 21:43:56 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 7 Jan 2011 16:43:56 -0500 Subject: [ExI] Morality (was: atheists declare religions as scams) In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: On Jan 7, 2011, at 1:29 PM, Sondre Bjellås wrote: > I don't see the problem with this moral example, and of course #2 is the worse one. In the first one, you are not inflicting death upon the single individual. Yes you are, you're killing one man to save 5. > In the second example, you are initiating physical force towards another human being, And in the first example I am also initiating physical force by moving that switch, resulting in the death of a human being. > The moral thing to do would be to allow the 5 people to die, while myself and the fatty survive.
If the end result of morality is that more people suffer and die, then morality would have no point and there would be little reason to be moral. > Moral values have to hold true to all contexts and not contradict each other People like to say things like that, and it might be nice if it were so, but it has never been found to be even close to the truth. In reality moral values NEVER hold true in all contexts and ALWAYS contradict each other. > Example: Is it morally right to use physical force towards other human beings if that will save some other human beings? Well I can only speak for myself but I'd be willing to step on an innocent person's big toe if that saved another person's life. > The moral truth of "you shall not initiate physical force" tells us that NO, we should not morally accept the killing of another human being. Not for two people, not for 5 people, not for a thousand people and not even for a million people. Arithmetic is one of the very few things that we know to be true and consistent; the idea that it's OK to use this true and consistent thing on trivial matters, like making change, but that we must never use it on important matters, like morality, makes absolutely no sense to me. I think one person dying is bad, two people dying is worse and three is even worse. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sondre-list at bjellas.com Fri Jan 7 22:33:21 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 23:33:21 +0100 Subject: [ExI] Morality (was: atheists declare religions as scams) In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: 2011/1/7 John Clark > On Jan 7, 2011, at 1:29 PM, Sondre Bjellås wrote: > > I don't see the problem with this moral example, and of course #2 is the > worse one. In the first one, you are not inflicting death upon the single > individual. > > > Yes you are, you're killing one man to save 5. > No, I am not killing anyone. I did not initiate the physical threat toward either of those people. I'm not obligated to give all my money to poor people to save their lives, am I? > In the second example, you are initiating physical force towards another > human being, > > > And in the first example I am also initiating physical force by moving that > switch, resulting in the death of a human being. > You are not initiating the physical force; either the trolley started by accident or someone pushed it on purpose. If I give a poor man only $5 when he asks for help, and the next day he is dead, frozen to death because he couldn't afford both drugs and food, am I responsible in any way for his death? Was I a contributing factor? Of course not. > > The moral thing to do would be to allow the 5 people to die, while myself and the > fatty survive. > > > If the end result of morality is that more people suffer and die then > morality would have no point and there would be little reason to be moral. > > People die; that's a fact of life. Morality will improve our probability of survival and help us work well together in a society.
> Moral values have to hold true to all contexts and not contradict each > other > > > People like to say things like that, and it might be nice if it were so, > but it has never been found to be even close to the truth. In reality moral > values NEVER hold true in all contexts and ALWAYS contradict each other. > > I had a discussion around this with my wife earlier tonight, and life ain't all black and white, that's right. I don't have all the answers right now; I can see events where a certain moral value could be bent/broken, but I have yet to come up with one which would inflict a negative result upon the receiver of any act violating true moral values. We discussed the following scenario: Let's say my brother is standing on a bridge about to take his own life. If I physically take him down from that bridge, I'm initiating physical force which is against his own will. I don't believe that most people who want to die have a mental disorder (an almost baseless argument, from some experience). My first reaction would be to stop him, but only to verify and understand his will to take his own life. If he truly wants to die, who am I to stop him? I'm reaching out a helping hand, not physically abusing my brother. In the end I think I realized that the scenario was not a contradiction after all; it would become one only if I kept my brother away from the bridge by force in the future. > Example: Is it morally right to use physical force towards other human > beings if that will save some other human beings? > > > Well I can only speak for myself but I'd be willing to step on an innocent > person's big toe if that saved another person's life. > > Of course anyone would; I would even steal from another person if it could save someone's life. BUT, I would have to pay for my violations. The good deed of saving another person's life doesn't invalidate other moral truths.
Yet that is exactly what happens with governments: they violate all the moral truths that exist and never pay back for their evildoing. > The moral truth of "you shall not initiate physical force" tells us that > NO, we should not morally accept the killing of another human being. Not for > two people, not for 5 people, not for a thousand people and not even for a > million people. > > > Arithmetic is one of the very few things that we know to be true and > consistent, the idea that it's OK to use this true and consistent thing on > trivial matters, like making change, but we must never use it on important > matters, like morality, makes absolutely no sense to me. I think one person > dying is bad, two people dying is worse and three is even worse. > > I did not say the opposite of what you're saying. Arithmetic applies to the decision of reducing the consequences of the scenario described, so it's logical and correct to do whatever you can to avoid deaths in the position (first scenario) where you are not directly responsible for their deaths. On the other hand (second scenario), you can't morally justify the killing of a fat guy to save five other people, which breaks the moral principle of not killing anyone. > John K Clark > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sondre-list at bjellas.com Fri Jan 7 22:39:06 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 7 Jan 2011 23:39:06 +0100 Subject: [ExI] A better option than fish oil? In-Reply-To: <4D2762EC.4090205@gnolls.org> References: <4D2762EC.4090205@gnolls.org> Message-ID: http://www.boston.com/news/science/articles/2010/06/25/jaw_dropping_levels_of_heavy_metals_found_in_whales/ - Sondre On Fri, Jan 7, 2011 at 8:01 PM, J.
Stanton wrote: > On 1/7/11 4:00 AM, Sondre Bjellås wrote: > >> The ocean ain't exactly a >> pure ("clean") substance, a better alternative is plant-based oils. >> Though a sales guy would say anything to sell his products, no matter >> what real effect or fact or truth their products have. - Sondre On Thu, >> Jan 6, 2011 at 8:39 AM, John Grigg wrote: >> >>> > A better means of getting Omega-3's? >>> >> > What evidence do you have for this assertion? Consumer Reports found no > measurable mercury in any fish oil capsules they tested. > > (There's a lot to say about that Mercola page, because many of its claims > about krill vs fish oil are either false or suspect...but I'm still working > on it and it'll take a while.) > > JS > http://www.gnolls.org > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Sondre Bjellås http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Fri Jan 7 23:00:14 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 07 Jan 2011 17:00:14 -0600 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <007d01cbae9a$39803050$ac8090f0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> Message-ID: <4D279AFE.7010400@satx.rr.com> On 1/7/2011 12:39 PM, spike wrote: > If you refer to the story of Onan, his sin was not masturbation, but > rather his intentionally failing to impregnate his late brother's > widow. See Genesis 38:9.
Your argument can actually still be saved > regarding a false and incorrect moral value, if you modify the text > following your original comment to something along the lines of "Ancient > religions required the brother of the deceased man to impregnate his > widow, so that the deceased would have heirs." Clearly this is a > failing of any society and belief system that would propagate such an > egregious notion. I'm always amazed at how few biblethumpers seem to know this. I feel that they should insist that their male flock obey this instruction of the Creator and spend a lot of effort flocking their widows-in-law and raising their kids. It's not just a good idea, it's God's Law! Damien Broderick From spike66 at att.net Sat Jan 8 00:12:09 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 16:12:09 -0800 Subject: [ExI] whoa! Message-ID: <003801cbaec8$b0d54450$127fccf0$@att.net> Earthquake! 4:11 PST, San Jose California. Hope everyone is OK. spike From atymes at gmail.com Sat Jan 8 00:30:01 2011 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 7 Jan 2011 16:30:01 -0800 Subject: [ExI] whoa! In-Reply-To: <003801cbaec8$b0d54450$127fccf0$@att.net> References: <003801cbaec8$b0d54450$127fccf0$@att.net> Message-ID: Not felt here in Mountain View. Apparently it was only 4.1 - you may have been near the epicenter to feel it. On Fri, Jan 7, 2011 at 4:12 PM, spike wrote: > Earthquake! 4:11 PST, San Jose California. Hope everyone is OK. spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From natasha at natasha.cc Sat Jan 8 00:30:07 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 07 Jan 2011 19:30:07 -0500 Subject: [ExI] whoa! In-Reply-To: <003801cbaec8$b0d54450$127fccf0$@att.net> References: <003801cbaec8$b0d54450$127fccf0$@att.net> Message-ID: <20110107193007.bfrfyxww0w4ogsww@webmail.natasha.cc> Hold on!
Quoting spike : > Earthquake! 4:11 PST, San Jose California. Hope everyone is OK. spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Sat Jan 8 00:19:38 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 16:19:38 -0800 Subject: [ExI] whoa! Message-ID: <004001cbaec9$bcd44ac0$367ce040$@att.net> They had the magnitude, 4.1 Richter, within 6 minutes of the shock. That probably wouldn't cause much mischief in an area accustomed to it. http://earthquake.usgs.gov/earthquakes/recenteqscanv/FaultMaps/San_Francisco.html As you were. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Jan 8 00:19:38 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 16:19:38 -0800 Subject: [ExI] whoa! Message-ID: <004501cbaec9$bcddc0a0$369941e0$@att.net> Earthquake! Check this: http://earthquake.usgs.gov/earthquakes/recenteqscanv/FaultMaps/San_Francisco.html This is so cool, they already identified the location, but not the magnitude, within two minutes of the event. We are living in a dream world of information availability. Good luck to those living in south San Jose. I am up at the north end, and my Newtonmas tree didn't topple, nothing broken. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanobjc at gmail.com Sat Jan 8 00:37:56 2011 From: ryanobjc at gmail.com (Ryan Rawson) Date: Fri, 7 Jan 2011 16:37:56 -0800 Subject: [ExI] whoa! In-Reply-To: <004501cbaec9$bcddc0a0$369941e0$@att.net> References: <004501cbaec9$bcddc0a0$369941e0$@att.net> Message-ID: A coworker in a datacenter in SJC said the shaking racks were not fun, but he wasn't injured. -ryan 2011/1/7 spike : > Earthquake!
Check this: > > > > http://earthquake.usgs.gov/earthquakes/recenteqscanv/FaultMaps/San_Francisco.html > > > > This is so cool, they already identified the location, but not the > magnitude, within two minutes of the event. We are living in a dream world > of information availability. > > > > Good luck to those living in south San Jose. I am up at the north end, and > my Newtonmas tree didn't topple, nothing broken. > > > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From ryanobjc at gmail.com Sat Jan 8 00:28:09 2011 From: ryanobjc at gmail.com (Ryan Rawson) Date: Fri, 7 Jan 2011 16:28:09 -0800 Subject: [ExI] whoa! In-Reply-To: <003801cbaec8$b0d54450$127fccf0$@att.net> References: <003801cbaec8$b0d54450$127fccf0$@att.net> Message-ID: It was just a 4.1, the more lasting effect is all the twitter/facebook comments :-) -ryan On Fri, Jan 7, 2011 at 4:12 PM, spike wrote: > Earthquake! 4:11 PST, San Jose California. Hope everyone is OK. spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Sat Jan 8 01:08:04 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 17:08:04 -0800 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <4D279AFE.7010400@satx.rr.com> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> <4D279AFE.7010400@satx.rr.com> Message-ID: <005b01cbaed0$8164d250$842e76f0$@att.net> ... On Behalf Of Damien Broderick Subject: Re: [ExI] atheists declare religions as scams.
On 1/7/2011 12:39 PM, spike wrote: >> If you refer to the story of Onan, his sin was not masturbation, but rather his intentionally failing to impregnate his late brother's >> widow. See Genesis 38:9... Clearly this is a failing of any society and belief system that would propagate such an egregious notion. >...I'm always amazed at how few biblethumpers seem to know this. I feel that they should insist that their male flock obey this instruction of the Creator and spend a lot of effort flocking their widows-in-law and raising their kids. It's not just a good idea, it's God's Law!...Damien Broderick Hmmm, a good theologian who knows her shit could easily find a way out of this philosophical bind. She would argue that while recognizing Onan's story refers to the screw-the-sister-in-law rule and even heaps scorn on one who disobeyed it, the actual command to impregnate one's brother's widow is not actually found anywhere in modern scriptures. For that reason, we might extrapolate that it isn't applicable today. Further, there may have even been some logic in that notion in the old days, when women were not allowed to own property, as it was even in the west until surprisingly recently, and still is in some places today. If Onan's sister-in-law had only daughters, he might have intentionally prevented her having a male heir, so that he (Onan) could inherit his brother's property. Another motive I thought of is that his sister-in-law was knockout gorgeous, and as long as she didn't conceive an heir, it was his fucking duty to keep trying. And trying. And trying.
spike From spike66 at att.net Sat Jan 8 02:32:11 2011 From: spike66 at att.net (spike) Date: Fri, 7 Jan 2011 18:32:11 -0800 Subject: [ExI] mass transit again Message-ID: <006c01cbaedc$41a60ab0$c4f22010$@att.net> This is an example of what I mentioned a few days ago about something being a way bigger threat to society than global warming; that bigger threat is feral humans: http://www.cnn.com/2011/CRIME/01/07/station.videotaped.incident/index.html?hpt=T2 Last week it was this: http://www.bradenton.com/2010/12/27/2837235/manatee-sheriff-couple-attacked.html It is a reason why I think most public transit notions are a dead end. Individual cars serve as suits of armor, providing a defensive barrier. The challenge is to invent a form of public transit which somehow protects the unarmed prole from other proles. If we build the infrastructure correctly, the ferals will devour each other, and the rest of us can go about our business. In the long run I am thinking something like old-tech ski lift technology is the way to move proles in the big city: moving cable, with the option of riding alone in an individual car or with as many as four riders per car. I don't know the exact mechanism, but it should be at least possible to create a car that individually transfers from one moving cable to another, so that one need only enter the coordinates of the city block where one wants to end up, and the rest is mechanized. This does away with most parking lots, or rather moves them out where there is plenty of room and the theoretical possibility of making the parking lots safe. It could be that I am overly focused on the feral humans thing. Damien's book Transcension has Dr. Malik being slain by ferals in a most memorable and disturbing passage. Fortunately he was frozen and eventually saved by that technology. I propose avoiding all possible contact with them. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rtomek at ceti.pl Sat Jan 8 05:20:56 2011 From: rtomek at ceti.pl (Tomasz Rola) Date: Sat, 8 Jan 2011 06:20:56 +0100 (CET) Subject: [ExI] Morality In-Reply-To: <008801cbae9e$e45a4ca0$ad0ee5e0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> <4D274E4C.7010106@aleph.se> <008801cbae9e$e45a4ca0$ad0ee5e0$@att.net> Message-ID: On Fri, 7 Jan 2011, spike wrote: > BillK, if only morality were this simple, life would be free of the > maddening moral ambiguity we face every day. [...] > Years later, westerners began to hear of the murderous genocide which had > taken place in Rwanda and Burundi, at which time we asked ourselves severely > introspective moral questions, such as "Where the hell is Rwanda and > Burundi?" I'm afraid this wasn't quite so. Google +clinton +rwanda and you can get to pages like this: http://www.guardian.co.uk/world/2004/mar/31/usa.rwanda http://www.theatlantic.com/past/docs/issues/2001/09/power.htm So, there was plenty of information. Just not for the public. What the public got was downplaying of events while they were going on. After that, well, why not say sorry, it doesn't cost much. As for "there are no natural resources there" - hard to believe, such a big coordinated bloodfest for nothing at all. > OK suppose we had all been internet users back then, and actually heard of > either of these events as they took place. What would we do? How does the > rule Just Say No apply when we are witnessing genocide? That's what we did > when we witnessed both genocide and slavery in Europe in the early 1940s, we > just said no. That didn't work out so well from what I hear. Oh, it seems you are very much misinformed. From what I have heard, the US didn't say "no"; instead the US said nothing at all. Here is another, rather lengthy and very boring, point of view.
Why do I serve it to you if I think it is boring? Why, I am a well-known sadist, of course. It comes from the book "Jan Karski" by Yannick Haenel. The book is a bit controversial in France, I hear, but the following words are quite consistent (IMHO) with what I know and how I imagine those things could have been told by Karski (maybe with some emotions removed, but on the other hand, I am not very emotional). First comes my transcription from a Polish Radio programme about the book: http://www.polskieradio.pl/8/529/Artykul/273441,Jan-Karski-w-oczach-pisarzy-francuskich The transcription is in Polish, and I include it here for archival reasons. Next comes my translation of it. ----- "Przyzwolono na eskterminacje Zydow. Nikt nie probowal jej przerwac, nikt nawet nie chcial probowac. Nie uwierzono mi, gdy w Londynie i Waszyngtonie powtarzalem zadania ludzi z warszawskiego getta. Nikt mi nie uwierzyl, bo nikt nie chcial mi uwierzyc. Ciagle jeszcze widze twarze tych wszystkich, do ktorych mowilem. Doskonale pamietam ich zaklopotanie. Byl rok 42-gi. Czy byli rownie zaklopotani trzy lata pozniej, gdy odkryto obozy zaglady? Wiem, ze proklamujac zwyciezcow i oglaszajac ich wygrana tryumfem wolnego swiata, nie czuli zazenowania. Jak swiat, ktory pozwolil na zaglade Zydow moze tytulowac sie wolnym? Jak smie twierdzic, ze cokolwiek wygral? W 45-tym roku nie bylo zwyciezcow, byli tylko wspolwinni i klamcy. Gdy mowilem Anglikom, ze w Polsce trwa eksterminacja Zydow, gdy w nieskonczonosc deklamowalem przed Amerykanami to samo poselstwo, slyszalem w odpowiedzi ze to niemozliwe, ze nikt nie mialby tak duzej wladzy czy nawet pomyslu, by zgladzic miliony ludzi. Sam Roosevelt nie kryl zdziwienia w mojej obecnosci, ale to jego zdziwienie bylo zwyklym oszustwem. Wszyscy wiedzieli ale udawali, ze nie wiedza. Odgrywali ignorantow, bo ignorancja byla dla nich korzystna. Jej podsycanie lezalo w ich interesie. A przeciez tajne sluzby dobrze wykonywaly swa prace.
Wiedziano o tym i wszyscy ci, ktorzy twierdzili, ze nie wiedza, pracowali juz na rzecz klamstwa. Przeczytalem wszystko, co powstalo na ten temat od zakonczenia wojny. Anglicy byli poinformowani, Amerykanie byli poinformowani. Nie wiedzac o sprawie, nie probowali przerwac procesu zaglady Zydow w Europie. Moze uwazali, ze nie trzeba go przerywac? Moze nie trzeba bylo dawac europejskim Zydom szansy na ocalenie? Tak czy inaczej, sprawny przebieg eksterminacji wynikal z faktu, ze Alianci udawali, iz o niczym nie wiedza. Dlatego wychodzac ze spotkania z Rooseveltem 28-ego lipca 1943r. zrozumialem, ze wszystko stracone. Europejscy Zydzi gineli, jeden po drugim, mordowani przez nazistow przy biernym wspoludziale Anglikow i Amerykanow. Usiadlem na lawce przed Bialym Domem, na skwerze Bohaterow Niepodleglosci, posrod pieknych cedrow i krzewow akacji i owiewany zapachem laurowcow patrzylem przez kilka godzin, jak rozpada sie swiat. Zrozumialem, ze nigdy nie uda sie poruszyc sumienia swiata, choc tak bardzo pragneli tego dwaj mezczyzni z warszawskiego getta. Zrozumialem, ze samo pojecie sumienia swiata juz nie istnieje. Wszystko sie skonczylo. Swiat wkraczal w epoke, w ktorej juz nic nie mialo stac na przeszkodzie zniszczeniu, bo stawianie oporu niszczycielom przestalo przynosic korzysci. Zniszczenie mialo sie dokonywac coraz jawniej, nie napotykajac zadnych granic. I nie istnialo juz zadne dobro, ktore by moglo przeciwstawic sie zlu, istnialo juz tylko zlo, wszedzie. Roosevelt dzielil sie ze mna wspaniala wizja przyszlosci, w ktorej ludzkosc nie bedzie dopuszczac do nastepnych wojen i obali nawet sama idee wojny. On, jak wielu innych, tak pochopnie mowil o tym, co stanie sie po wojnie. Tymczasem wojna kazdego dnia dziala sie na naszych oczach. Roosevelt chcial przede wszystkim uchylic sie od odpowiedzialnosci. Siedzialem na lawce, na skwerze Bohaterow Niepodleglosci i chcialo mi sie wymiotowac. Mdlosci uratowaly mi kilka razy zycie, ale tym razem nie przychodzily z pomoca. 
Siedzialem na lawce przez kilka godzin, opatulony wojskowym plaszczem. Tuz po moim przybyciu na lotnisko w Nowym Jorku ktos zarzucil mi na ramiona ten plaszcz jak derke, ktora przykrywa sie konia po dlugiej, wygranej gonitwie. W Bialym Domu zaczely rozswietlac sie okna a ja zrozumialem, ze zbawienie nie nadejdzie, ze nie nadejdzie juz nigdy, ze samo pojecie zbawienia jest martwe. Gdy rok pozniej wybuchlo Powstanie Warszawskie, Polacy do konca wierzyli ze Anglicy, Amerykanie i Rosjanie przybeda im na pomoc. A ja od 28-ego lipca 1943r wiedzialem, ze nie zrobia nic. Tamtego popoludnia dotarlo do mnie, ze Warszawa zostanie opuszczona dokladnie tak, jak Polska zostala opuszczona we wrzesniu 1939-ego i tak, jak zostali opuszczeni Zydzi z Polski, Niemiec, Holandii, Francji, Belgii, Norwegii, Grecji, Wloch, Zydzi z Chorwacji, Bulgarii, Austrii, z Wegier, z Rumunii i z Czechoslowacji. Z jednej strony zaglada a z drugiej opuszczenie. Na nic wiecej nie mozna bylo miec nadziei. To byl program przyszlego swiata i ten swiat rzeczywiscie nadszedl. Wszyscy odczulismy to osamotnienie i nadal je odczuwamy. Wlasnie wtedy zaczalem cierpiec na absolutna bezsennosc. Nie spie od 28-ego lipca 1943r, od ponad 50-ciu lat. Nie moge zasnac, bo slysze glosy dwoch mezczyzn z warszawskiego getta. Kazdej nocy slysze ich przeslanie, rozbrzmiewa ono w mojej glowie. Cos, czego nikt nie chcial slyszec, od 50-ciu lat w nieskonczonosc zakloca mi sen." ----- Here the transcription ends. Translation (mine, rough and full of all kinds of errors): ----- "There was consent for the extermination of the Jews. Nobody tried to stop it, nobody even wanted to try. I was not believed when, in London and Washington, I repeated the demands of the people of the Warsaw ghetto. Nobody believed me, because nobody wanted to believe me. I still see the faces of all those to whom I spoke. I remember their embarrassment perfectly. It was the year '42. Were they equally embarrassed three years later, when the death camps were discovered? 
I know that when they proclaimed the winners and announced their victory as "a triumph of the free world", they felt no embarrassment. How can a world that allowed the extermination of the Jews call itself free? How dare it claim to have won anything? In '45 there were no winners, only accomplices and liars. When I told the English that the extermination of the Jews was under way in Poland, when I declaimed the very same message ad infinitum in front of the Americans, all I heard in reply was that it was impossible, that nobody could have power so great, or even the idea, to kill millions of people. Roosevelt himself did not hide his astonishment in my presence, but his astonishment was a plain fraud. They all knew, but pretended not to know. They played the ignorant, because ignorance was beneficial to them. Feeding it was in their interest. And yet the secret services were doing their job well. It was known, and all those who claimed not to know were already working for the lie. I have read everything written on the subject since the war's end. The English had been informed, the Americans had been informed. Knowing of the matter, they did not try to stop the process of the Jews' extermination in Europe. Maybe they thought there was no need to stop it? Maybe the European Jews were not to be given a chance of rescue? One way or another, the smooth course of the extermination followed from the fact that the Allies pretended to know nothing. And so, leaving my meeting with Roosevelt on July 28th, 1943, I understood that all was lost. The European Jews were dying one by one, murdered by the Nazis with the passive complicity of the English and the Americans. I sat down on a bench in front of the White House, in Lafayette Square, among beautiful cedars and acacia shrubs, and in the scent of laurel I watched for several hours as the world fell apart. I realised it would never be possible to move the conscience of the world, though the two men from the Warsaw ghetto had desired it so much. I realised that such a thing as the conscience of the world no longer existed. Everything was over. 
The world was entering an epoch in which nothing would stand in the way of destruction, because resisting the destroyers no longer paid. Destruction would proceed ever more openly, meeting no limits. And there was no longer any good that could oppose the evil; there was only evil, everywhere. Roosevelt shared with me a wonderful vision of the future, in which humanity would no longer allow wars and would abolish even the very idea of war. He, like many others, was so quick to talk about what would come after the war. Meanwhile, the war went on every day before our eyes. Roosevelt wanted, above all, to evade responsibility. I sat on a bench in Lafayette Square and I wanted to vomit. Nausea had saved my life a few times before, but this time it did not come to my aid. I sat on the bench for several hours, wrapped in a military coat. Soon after my arrival at the airport in New York, someone had thrown this coat over my shoulders like the blanket thrown over a horse after a long, victorious race. The windows of the White House began to light up, and I understood that salvation would not come, would never come, that the very notion of salvation was dead. A year later the Warsaw Uprising broke out, and the Poles believed to the very end that the English, the Americans and the Russians would come to their aid. But I had known since July 28th, 1943 that they would do nothing. That afternoon I understood that Warsaw would be abandoned, exactly as Poland had been abandoned in September 1939, and as the Jews of Poland, Germany, the Netherlands, France, Belgium, Norway, Greece, Italy, Croatia, Bulgaria, Austria, Hungary, Romania and Czechoslovakia had all been abandoned. On one side, extermination; on the other, abandonment. There was nothing more to hope for. This was the programme of the world to come, and that world indeed came. We have all felt this solitude, and we feel it still. That was when my absolute insomnia began. I have not slept since July 28th, 1943, for over 50 years. I cannot sleep, because I hear the voices of the two men from the Warsaw ghetto. 
I hear their message every night; it echoes in my head. Something that nobody wanted to hear has been disturbing my sleep, endlessly, for 50 years." ----- And here the translation ends, too. As I said, the fictionalised Karski delivers his message in a very emotional tone, and it doesn't make me feel very comfortable... But maybe this is actually how it should be. His grim vision of the Western attitude is largely right, but needs some correction: [ http://en.wikipedia.org/wiki/List_of_individuals_and_groups_assisting_Jews_during_the_Holocaust ] On this list there is even Albert Goring (brother of Hermann, the Luftwaffe commander), but there is almost no mention of Americans (ditto for Britons). A careful reading of one diplomat's biography gives very interesting insights: [ http://en.wikipedia.org/wiki/Hiram_Bingham_IV ] Just in case, if anybody still wants to read: [ http://en.wikipedia.org/wiki/Jan_Karski ] So, all in all, a few individuals from the US and UK helped, but there was (it seems) no state-level action, word of support, or anything of the kind. What the US was interested in was "the best of the refugees" - the educated, scientists with achievements, etc. The rest were left out. Perhaps this was justified, but I really need to dive deeper into this shit, and I do it only for my own pleasure. Of course, the problem was a much bigger one - besides 3 million Jews, there were, for example, 2.5 million other Polish citizens (ethnic Poles, Ukrainians and Belarusians) killed by the Nazis or the Soviets (this does not include military deaths, only civilians). Overall, more than 5.5 million Polish citizens. And it was a sure thing that after dealing with the Jews, the Nazis would go on to deal with the rest of us. In the Soviet Union, the Nazis killed about 14 million civilians (some of these may have died from very harsh conditions, hunger and illness, but certainly not all) and about a million Jews. So they were quite capable when it came to mass killing. I would say history seems to repeat itself, in a way. 
As yet, the lessons have not necessarily been learned. Citing Jan Karski again, from Marek Edelman's talk about him in 2000 at the Polish PEN Club: "Polityka ma byc pragmatyczna ale musi byc moralna, pragmatyczna i moralna. Jezeli bedzie niemoralna to przegra." Translation (mine): "Politics has got to be pragmatic, but it must be moral: pragmatic and moral at the same time. If it is immoral, it will fail." I strongly share this point of view. Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From jonkc at bellsouth.net Sat Jan 8 07:05:15 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 8 Jan 2011 02:05:15 -0500 Subject: [ExI] Computer jokes In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> <4D274E4C.7010106@aleph.se> <008801cbae9e$e45a4ca0$ad0ee5e0$@att.net> Message-ID: Tomasz Rola wrote: > A C programmer asked whether computer had Buddha's nature. > As the answer, master did "rm -rif" on the programmer's home > directory. And then the C programmer became enlightened... "I teach UNIX" said a programmer. "Oh, that's great," was the reply, "What do you teach them?" Unix is user-friendly. It's just very selective about who its friends are. There are 10 types of people in the world: those who understand binary, and those who don't. Bad at math? Call 1-800-[(10x)(ln(13e))]-[sin(xy)/2.362x] A bad random number generator: 1, 1, 1, 1, 1, 4.39*10^42, 1, 1, 1 How do I love thee? My accumulator overflows. To be, or not to be, those are the parameters. Daddy, what does FORMATTING DRIVE C mean? f u cn rd ths, u cn gt a gd jb n cmptr prgrmmng. 
Isn't it odd that all the members of the Association for Computing Machinery are human? What this country needs is a good five-cent microcomputer. One picture is worth 128K words. Cretin and UNIX both start with C. Press any key...no, no, no, NOT THAT ONE! Access denied--nah nah na nah nah! Cannot find REALITY.SYS. Universe halted. Relax, it's only ONES and ZEROS! John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Jan 8 05:50:53 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 8 Jan 2011 00:50:53 -0500 Subject: [ExI] Morality In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: On Jan 7, 2011, at 5:33 PM, Sondre Bjellås wrote: > People dies, that's a fact of life. So if you can't save everybody then don't save anybody? > Morality will improve our probability for survival and help us work well together in a society. Certainly not in this case! If I do what you recommend, if I do what you say is the moral action, it will cause more death and misery than if I do the immoral thing. So I am immoral and proud of it. > you can't morally justify the killing of a fat guy to save five other people My gut very strongly tells me that too, but my brain tells me that it is justified to kill one guy to save 5 other people, and if that's not moral then to hell with morality. In a real situation I don't know if I would have the strength to resist my gut instinct, but I do know that if I was on a jury judging somebody who had, I would vote not guilty. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bbenzai at yahoo.com Sat Jan 8 16:00:54 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 8 Jan 2011 08:00:54 -0800 (PST) Subject: [ExI] Fw: Re: simulation as an improvement over reality Message-ID: <715646.12049.qm@web114420.mail.gq1.yahoo.com> > > > Your argument is the same > as the old one about a person who, after being copied, > should be quite happy to shoot himself. Naturally that is > silly. Nobody would be happy to shoot themselves, > regardless of how many identical copies of them were in > existence. > > > > Unless the identical copies are in lockstep and > shooting oneself will > > leave at least one of the copies running. In that > case, the stream of > > consciousness of the terminated copies would continue > uninterrupted. > > Weeeelll... If one of a set of copies in lock-step shoots itself, so do all the others! Ben Zaiboc From bbenzai at yahoo.com Sat Jan 8 16:04:09 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 8 Jan 2011 08:04:09 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams. Message-ID: <504077.46907.qm@web114418.mail.gq1.yahoo.com> "spike" wrote: > > > ... On Behalf Of Stathis Papaioannou > > ... > > > > >...There is no contradiction in accepting a > religion but rejecting its ancient beliefs as literal truth. > Some theologians take the Bible about as seriously as > classical scholars take the Iliad and the Odyssey; that is, > they take it very seriously but they don't actually believe > that any of the supernatural stuff happened...Stathis > Papaioannou > > > > > > Stathis I think this is an understatement. I > would say *most* theologians, particularly the > fundamentalist variety, discount the supernatural. I > might be projecting, but I don't see how they could miss > that if they really study the hell out of the bible. > > > > Hmmm, study the hell out of the bible, that has a > delightful double meaning. > > Wait, what? 
If they are fundamentalist theologians, how can they discount the supernatural?? I thought all theologians started with the assumption that a god exists (i.e. the biggest supernatural proposition of all)? Of course, I don't really see how theologians stay theologians. Studying the bible tends to get rid of not just hell, but heaven and god as well. Ben Zaiboc From hkeithhenson at gmail.com Sat Jan 8 16:36:53 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 8 Jan 2011 09:36:53 -0700 Subject: [ExI] Morality Message-ID: On Sat, Jan 8, 2011 at 5:00 AM, Tomasz Rola wrote: > On Fri, 7 Jan 2011, spike wrote: (Discussion of Rwanda and the Jewish genocide) You missed the Cambodian and Armenian genocides. In early historical times we have a description of a genocide in the Book of Numbers, King James Version, chapter 31, verses 7-16. When the water came up at the end of the last ice age and the carrying capacity of eastern Australia was greatly reduced, there was a genocide that left fields of bones. I don't like saying so, but genocides (and wars) are a *feature* of the human species, not a bug. We are top predators, and the ultimate way top predator numbers are controlled is predation by other members of the species. (Talk to lions about this.) But a top predator killing other top predators is a risky business, or at least it was during our long evolution in bands and little tribes. So we don't do it unless the downside (historically, looming starvation) is worse. There is also nothing in our genetic selection in the EEA that makes us concerned about the fate of unrelated people far away. The fact we are concerned at all is memetic. So if you don't want the social pressures that lead to wars and genocides, figure out how to keep the income per capita rising (or at least not falling) and the prospects for the future looking good. 
Keith From atymes at gmail.com Sat Jan 8 17:44:10 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 8 Jan 2011 09:44:10 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <504077.46907.qm@web114418.mail.gq1.yahoo.com> References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> Message-ID: On Sat, Jan 8, 2011 at 8:04 AM, Ben Zaiboc wrote: > Wait, what? > > If they are fundamentalist theologians, how can they > discount the supernatural? Simple: they (try to) declare themselves outside the rules of logic. Causality, completeness, and so forth only apply to them if and when they wish it to. That their arguments tend to crumble when rigorous rules of logic are applied, as you and I do and as reality does with anything that can actually be tested, is something they try to gloss over. From stefano.vaj at gmail.com Sat Jan 8 18:56:04 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 8 Jan 2011 19:56:04 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <005701cbadc9$952ff150$bf8fd3f0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <005701cbadc9$952ff150$bf8fd3f0$@att.net> Message-ID: 2011/1/6 spike > Regarding cultural christians, John where would you rather live, if you had > to choose one: Calcutta, Tehran, or Salt Lake City? > Calcutta. At least with plenty of money at hand. But the comparison is not really fair, since there is much better than Calcutta in India, and much worse than SLC in the US of A. Tehran, btw, is not that bad either. Ever been there? -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From msd001 at gmail.com Sat Jan 8 19:53:50 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 8 Jan 2011 14:53:50 -0500 Subject: Re: [ExI] Fw: Re: simulation as an improvement over reality In-Reply-To: <715646.12049.qm@web114420.mail.gq1.yahoo.com> References: <715646.12049.qm@web114420.mail.gq1.yahoo.com> Message-ID: On Sat, Jan 8, 2011 at 11:00 AM, Ben Zaiboc wrote: > Weeeelll... > > If one of a set of copies in lock-step shoots itself, so do > all the others! That is true of shared volition. What if we externally shoot one of the copies? Does that break the synchronization? Are two identities created by our act of attempting to kill one of the redundancies of a primary identity? (assuming a non-fatal shot) "I'm sorry, you were part of a union that comprised a redundant identity but due to a system malfunction aka bullet through processing unit #7 you came into existence. It's morally difficult to turn you off as a mere defect since you are still aware enough to object. Your first order of self-preservation will be to secure a redundant copy of your own (now unique) process." From eugen at leitl.org Sat Jan 8 08:54:54 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 8 Jan 2011 09:54:54 +0100 Subject: [ExI] whoa! In-Reply-To: References: <004501cbaec9$bcddc0a0$369941e0$@att.net> Message-ID: <20110108085454.GF16518@leitl.org> On Fri, Jan 07, 2011 at 04:37:56PM -0800, Ryan Rawson wrote: > Coworker in Datacenter in SJC said shaking racks was not fun, but he > wasn't injured. I felt a great disturbance in the Net, as if millions of heads suddenly touched down on rotating platters, and were suddenly silenced. I fear something terrible has happened. 
From rafal.smigrodzki at gmail.com Sat Jan 8 22:04:19 2011 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 8 Jan 2011 17:04:19 -0500 Subject: [ExI] beating a dead horse Message-ID: Can't help myself: Lubos Motl just posted an analysis of the maximum climate sensitivity to CO2: http://motls.blogspot.com/2011/01/climate-sensitivity-from-linear-fit.html#more From spike66 at att.net Sat Jan 8 22:50:48 2011 From: spike66 at att.net (spike) Date: Sat, 8 Jan 2011 14:50:48 -0800 Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona Message-ID: <002401cbaf86$7e92a7d0$7bb7f770$@att.net> This guy was seriously up-messed: http://www.youtube.com/profile?user=Classitup10&annotation_id=annotation_564778&feature=iv#p/u/4/E8Wr6AeZTCE {8-[ http://www.cnn.com/2011/CRIME/01/08/arizona.shooting/index.html?hpt=T1&iref=BN1 Best wishes for a full recovery to Rep. Giffords and the other injured bystanders. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 10 16:39:52 2011 From: spike66 at att.net (spike) Date: Mon, 10 Jan 2011 08:39:52 -0800 Subject: [ExI] one way trip to mars Message-ID: <002001cbb0e5$01d49720$057dc560$@att.net> First they laugh at you. Then they argue. Then they write Foxnews articles: http://www.foxnews.com/scitech/2011/01/10/space-volunteer-way-mission-mars/?test=latestnews Note in the archives I concluded a trip to Mars would be a one-way, this at least a dozen years ago. I also suggested at the time that there would be *plenty* of volunteers, no shortage. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bbenzai at yahoo.com Tue Jan 11 09:16:49 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 11 Jan 2011 01:16:49 -0800 (PST) Subject: [ExI] NYT reports criticisms of Precognition article Message-ID: <955466.42170.qm@web114407.mail.gq1.yahoo.com> --- On Sat, 8/1/11, Ben Zaiboc wrote: From: Ben Zaiboc Subject: Fw: Re: [ExI] NYT reports criticisms of Precognition article To: extropy-chat at lists.extropy.org Date: Saturday, 8 January, 2011, 16:02 Stathis Papaioannou observed: > On Fri, Jan 7, 2011 at 4:48 AM, Adrian Tymes > > wrote: > > > That's not how I read it. I think they're > > saying, the analysis fails > > to evaluate the > > probability that ESP does not exist, and > > only > > evaluates the probability that it > > does. > > Don't the two probabilities necessarily add up to > 1, > given that either > ESP does or does not exist? You see, Stathis, that's the kind of closed-minded thinking that is hampering ESP research! Open your mind, man! (Further. Further. Just a bit more... Yeah, that'll do!) Ben Zaiboc From eugen at leitl.org Tue Jan 11 10:03:14 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 11 Jan 2011 11:03:14 +0100 Subject: Re: [ExI] one way trip to mars In-Reply-To: <002001cbb0e5$01d49720$057dc560$@att.net> References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: <20110111100314.GG16518@leitl.org> On Mon, Jan 10, 2011 at 08:39:52AM -0800, spike wrote: > Note in the archives I concluded a trip to Mars would be a one-way, this at > least a dozen years ago. I also suggested at the time that there would be > *plenty* of volunteers, no shortage. The nice thing about robots, they're always one-way (unless you want to return samples). Nor do the lifters need to be man-rated, and they do not need to be extremely reliable. Now what this solar system needs is a mesh network of DTN routers. A router in every orbit, a robot on every planet. 
From possiblepaths2050 at gmail.com Tue Jan 11 08:02:04 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 11 Jan 2011 01:02:04 -0700 Subject: [ExI] mass transit again In-Reply-To: <006c01cbaedc$41a60ab0$c4f22010$@att.net> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> Message-ID: Spike, I ride the city bus on a regular basis and despite once getting into a heated exchange with one unpleasant & unbalanced fellow, I normally find lots of decent people (human beings, not "proles") who simply want to get to their destination without any criminal drama. And every time I see a young male give up his seat to an elderly person or someone who is handicapped, I feel there is hope for us all! : ) Do you really want to live in an "Oath of Fealty" kind of world? I realize that was not your point, but it is where you are headed in your line of thinking... Best wishes, John On 1/7/11, spike wrote: > This is an example of what I mentioned a few days ago about being a way > bigger threat to society than is global warming, this bigger threat is feral > humans: > > > > http://www.cnn.com/2011/CRIME/01/07/station.videotaped.incident/index.html?h > pt=T2 > > > > Last week it was this: > > > > http://www.bradenton.com/2010/12/27/2837235/manatee-sheriff-couple-attacked. > html > > > > It is a reason why I think most public transit notions are a dead end. > Individual cars serve as suits of armor, providing a defensive barrier. The > challenge is to invent a form of public transit which somehow protects the > unarmed prole from other proles. If we build the infrastructure correctly, > the ferals will devour each other, and the rest of us can go about our > business. > > > > In the long run I am thinking something like the old tech ski lift > technology is the way to move proles in the big city: moving cable, with the > option of riding alone in an individual car or with as many as four riders > per car. 
I don't know the exact mechanism, but it should be at least > possible to create a car that individually transfers from one moving cable > to another, so that one need only enter the coordinates of the city block > where one wants to end, and the rest is mechanized. This does away with > most parking lots, or rather moves them out where there is plenty of room > and the theoretical possibility of making the parking lots safe. > > > > It could be that I am overly focused on the feral humans thing. Damien's > book Transcension has Dr. Malik being slain by ferals in a most memorable > and disturbing passage. Fortunately he was frozen and was eventually saved > by that technology. I propose avoiding all possible contact with them. > > > > spike > > > > > > > > > > From atymes at gmail.com Tue Jan 11 08:42:00 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 11 Jan 2011 00:42:00 -0800 Subject: [ExI] one way trip to mars In-Reply-To: <002001cbb0e5$01d49720$057dc560$@att.net> References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: Volunteers, yes. Qualified volunteers? Not as many. People who don't run a very high risk of messing things up and turning it into a disaster, which could be foreseen way in advance? Per the article: "There will be tremendous public and political opposition from many members of the public to a mission which can only end in death." In so far as the mission takes any degree of public (government) resources, which a near future mission to Mars is likely to do, it is the public's and politicians' place to object. 2011/1/10 spike : > First they laugh at you. Then they argue. Then they write Foxnews > articles: > > > > http://www.foxnews.com/scitech/2011/01/10/space-volunteer-way-mission-mars/?test=latestnews > > > > Note in the archives I concluded a trip to Mars would be a one-way, this at > least a dozen years ago. I also suggested at the time that there would be > *plenty* of volunteers, no shortage. 
> > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From lubkin at unreasonable.com Sun Jan 9 03:23:42 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sat, 08 Jan 2011 22:23:42 -0500 Subject: Re: [ExI] youtube channel of the feral human who shot up the political rally in arizona In-Reply-To: <002401cbaf86$7e92a7d0$7bb7f770$@att.net> References: <002401cbaf86$7e92a7d0$7bb7f770$@att.net> Message-ID: <201101111734.p0BHY1P9025611@andromeda.ziaspace.com> Spike wrote: >Best wishes for a full recovery to Rep. Giffords and the other >injured bystanders. This shouldn't have happened to anyone, and Giffords seems like quite a decent person. I've had the TV coverage on non-stop since it happened. I think that since her husband is a shuttle commander and her brother-in-law is the current ISS commander, it feels like family. The latest detail is surreal: KOLD-TV (Tucson) is reporting that the little girl who was killed, Christina Taylor Green, was born on 9/11/2001, and was featured in the book Faces of Hope: Babies Born on 9/11. -- David. From bbenzai at yahoo.com Tue Jan 11 19:04:54 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 11 Jan 2011 11:04:54 -0800 (PST) Subject: Re: [ExI] one way trip to mars In-Reply-To: Message-ID: <818907.16146.qm@web114402.mail.gq1.yahoo.com> Eugen Leitl wrote: > > Now what this solar system needs is a mesh network of DTN > routers. > > A router in every orbit, a robot on every planet. Yes, indeed. Not only would that create useful infrastructure for exploiting the resources of the solar system, it would also provide the means for rapidly colonising it, once uploading is perfected, without competing with those who prefer to remain in meat. Vastly easier, cheaper and faster to send a few gigabytes of information into space than a few kilos of meat. 
This makes me wonder about the possibility of 'personality compression'. It occurred to me that there's no need to transmit all the information about your preferred body, for example, provided that standard body designs were available all over the place. You could just encode your preferred body type (a catalogue index number, basically) plus some parameters to vary it as you wish. Then I thought the same kind of method could probably be applied to the mind. Most minds will have a lot in common, just as all life has a lot in common (we are very similar, genetically, to pineapples, for example), so perhaps, given a sufficiently established infrastructure, you could travel as a set of mind-parameters, to be applied to a template when you arrive. It may be possible to compress a person down to megabytes, or even less, in this way. Every time a new mind-type was developed, a template for it would be distributed across all the servers in the solar system, and incorporated into a catalogue, ready for individuals who use that mind-type to use for quick, lightweight travel. Anyone with a unique mind-type who was unwilling to distribute its specifications would just have to put up with travelling as a much bigger file. Ben Zaiboc From hkeithhenson at gmail.com Tue Jan 11 18:05:00 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 11 Jan 2011 11:05:00 -0700 Subject: [ExI] Fwd: Conflict, was Tea party hypocrisy In-Reply-To: References: Message-ID: Forward from another list. On Mon, Jan 10, 2011 at 11:07 AM, Florin Popescu wrote: snip > What is relevant about political extremism, in the US and elsewhere, is the risk of global conflict spurred by an ever growing class of people with some means (time, transportation, external funding) but ever declining self-esteem and social mobility angst. That's a discussion I'd be glad to follow, if we can stay "professional" about it. 1. Humans are top predators. 2. 
The ultimate control on the number of top predators is other members of the same species. Talk to lions about this. 3. Human numbers that stress the ability of the ecosystem (and now the economy) to sustain them result from reproduction in excess of the death rate. (Obviously) 4. Humans are social, i.e., they normally do just about everything, including killing each other, in groups. Spreading memes about the nastiness of some other group is the way they are synchronized to act in concert. (I.e., go to war or related social disruptions.) This amounts to a high gain group behavioral switch. 5. Humans are sensitive to *relative* changes. So poor people who have historically been poor are much less likely to get into wars or revolts than people who have been doing well but whose future prospects are starting to look bad. 6. Fighting other humans is a dangerous business. Unless the ecosystem/economy is stressed or projected to be stressed and the alternative to war is worse, humans are strongly inhibited against war. However, being attacked causes the attacked group to instantly switch into war mode. (Pearl Harbor, 9/11, countless stone age attacks.) 7. While this model applies best to hunter gatherers, the effects of long term selection in the stone age apply to current humans. So (using this model) why the rise of the Tea Party? While the US GDP has been rising, the income has been concentrating in a small fraction of the population. One of the results has been stagnant or declining wages for the bulk of what used to be the middle class. A large fraction has been sliding into the lower class, and the prospects for them and their children do not look good. So it is understandable (in terms of this model) that this sub group of the US population with "ever declining self-esteem and social mobility angst" gets infested with radical memes and tends to support external wars as well. (Shades of what happened in Germany about the time the Nazis rose to power.) 
What to do to repair this problem is obvious: put the bulk of the US population into an economic growth mode where the future looks bright. *How* to do this is another question entirely. Keith Henson From algaenymph at gmail.com Tue Jan 11 20:14:33 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Tue, 11 Jan 2011 12:14:33 -0800 Subject: [ExI] Reframing transhumanism as good vs. evil Message-ID: <4D2CBA29.6030008@gmail.com> A while ago in college, I was in a speech class where one of the things we did was have a group critique our ideas. I had an idea to speak about transhumanism, to which one of my classmates rather indignantly asked me why I wanted to advocate biotech enhancement instead of medicine. That's the problem we have. Even when we're not seen as evil, we're seen as selfish nerds who are utterly indifferent to it. The sad thing is I find myself almost believing this. Causes that comedians can't brand as outright evil or obvious spin are pretty much about fighting evil and/or saving innocents. Citizen heroics, basically. What kind of citizen heroics do *we* have? From spike66 at att.net Wed Jan 12 02:59:47 2011 From: spike66 at att.net (spike) Date: Tue, 11 Jan 2011 18:59:47 -0800 Subject: [ExI] test Message-ID: <002d01cbb204$c64a9120$52dfb360$@att.net> Do not undisregard. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike66 at att.net Wed Jan 12 05:45:44 2011 From: spike66 at att.net (spike) Date: Tue, 11 Jan 2011 21:45:44 -0800 Subject: [ExI] mass transit again In-Reply-To: References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> Message-ID: <007101cbb21b$f4f948b0$deebda10$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Grigg Subject: Re: [ExI] mass transit again >...Spike, I ride the city bus on a regular basis and despite once getting into a heated exchange with one unpleasant & unbalanced fellow, I normally find lots of decent people (human beings, not "proles") who simply want to get to their destination without any criminal drama. And every time I see a young male give up his seat to an elderly person or someone who is handicapped, I feel there is hope for us all! : ) Johnny, there is hope for all of us. First of all, your comment "human beings, not proles": all proles are human beings. Furthermore, they are free. Do refer to Orwell's 1984, and if you haven't read it, log off the computer right now, and get thee to the library forthwith, borrow it and READ EVERY WORD sir, every single word. Regarding young males giving up seats to little old ladies, it is so easy for me to envision you doing that. Nay understatement, it is difficult for me to envision your not doing that, even if the little old lady is 27. Perhaps *especially* if she is 27. Why? Because you radiate nice. I admire that in you pal. >...Do you really want to live in an "Oath of Fealty" kind of world? I realize that was not your point, but it is where you are headed in your line of thinking...Best wishes, John It isn't what I want, but rather where I see us going. The shooting this weekend perhaps is contributing to my gloomy outlook on humanity, and how it is being played in the media.
We see political pundits desperate to score points on a tragedy, when I see no evidence this was anything other than an ordinary crazy person, of which we have tragically many. What I want to do is figure out a way to do mass transit which promotes security, without requiring the hiring of more cops. spike From spike66 at att.net Wed Jan 12 06:15:35 2011 From: spike66 at att.net (spike) Date: Tue, 11 Jan 2011 22:15:35 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <504077.46907.qm@web114418.mail.gq1.yahoo.com> References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> Message-ID: <007301cbb220$208b5190$61a1f4b0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Ben Zaiboc ... > > Stathis I think this is an understatement. I > would say *most* theologians, particularly the fundamentalist variety, > discount the supernatural. I might be projecting, but I don't see how > they could miss that if they really study the hell out of the bible. ... >...Wait, what? If they are fundamentalist theologians, how can they discount the supernatural?... Ja, you read correctly and I stand by it. Reasoning: what if you major in theology, become a really sharp bible scholar, well respected, scholarships offered and accepted, now you have your ThD, and they want you to teach. You have *no* other job skills, you like teaching the bible, so off you go. But if you read that bible carefully, learn the languages, do everything one must do to earn the ThD, I claim there is no reasonable way one could still believe the way the typical fundamentalist pew sitter does. Consequently, in schools of theology, the most common complaint is that professors destroy the students' faith in god. >...I thought all theologians started with the assumption that a god exists? (i.e. the biggest supernatural proposition of all)... Ben I could be wrong, but I think I am right.
They start out with that supposition, sure. But as has been known since the time of Solomon, "In much learning is great sorrow, in wisdom, suffering." I think that describes the process of studying scriptures until you realize these are the ideas of humans thinking about god, not the thoughts of god. That conclusion is completely inescapable. >...Of course, I don't really see how theologians stay theologians. Studying the bible tends to get rid of not just hell, but heaven and god as well. Ben Zaiboc Ja, well said sir. If anyone can dismiss the notion of 73 virgins, yet take seriously the christian version of the same thing, floating on clouds and playing harps, then I do not understand that line of reasoning, I really do not. Furthermore, a theologian must at some point study the notion of evolution. Perhaps I project, but if anyone can really look at nature, just look at the natural world and miss that notion, I don't understand that person. spike From reasonerkevin at yahoo.com Wed Jan 12 06:22:23 2011 From: reasonerkevin at yahoo.com (Kevin Freels) Date: Tue, 11 Jan 2011 22:22:23 -0800 (PST) Subject: [ExI] Fwd: Conflict, was Tea party hypocrisy In-Reply-To: References: Message-ID: <730573.55421.qm@web81608.mail.mud.yahoo.com> I'm not so certain about number 3. It seems to me that a common problem in many developing nations is stagnant or declining population growth. The number of people in an economy is irrelevant as long as the money continues to move from person to person. If population were a problem, China would have crashed long ago. There are plenty of resources available and I'm pretty certain that the ecosystem can handle many more. Problems occur in the economy when money stops moving. Those who have the money hang onto it. The only real way to get it going is to get that money out of those who are holding it and into the hands of those who will spend it. You can do this a number of ways.
The simplest is the redistribution of wealth, which I know is not popular here. The money then is moved to those who spend it and it works its way back up the chain in the form of profits to be redistributed again. Another way is to incur debt and spend that money. A third way is to provide an incentive for those who have the money to do something with it other than hold it. This means creating conditions where those holding the money can spend it in anticipation of a reward that is greater than had they held it. I'm sure there are other ways as well, but these are the first that come to mind. ________________________________ From: Keith Henson To: ExI chat list Sent: Tue, January 11, 2011 12:05:00 PM Subject: [ExI] Fwd: Conflict, was Tea party hypocrisy Forward from another list. On Mon, Jan 10, 2011 at 11:07 AM, Florin Popescu wrote: snip > What is relevant about political extremism, in the US and elsewhere, is the >risk of global conflict spurred by an ever growing class of people with some >means (time, transportation, external funding) but ever declining self-esteem >and social mobility angst. That's a discussion I'd be glad to follow, if we can >stay "professional" about it. 1. Humans are top predators. 2. The ultimate control on the number of top predators is other members of the same species. Talk to lions about this. 3. Human numbers that stress the ability of the ecosystem (and now the economy) to sustain them result from reproduction in excess of the death rate. (Obviously) 4. Humans are social, i.e., they normally do just about everything including killing each other in groups. Spreading memes about the nastiness of some other group is the way they are synchronized to act in concert. (I.e., go to war or related social disruptions.) This amounts to a high gain group behavioral switch. 5. Humans are sensitive to *relative* changes.
So poor people who have historically been poor are much less likely to get into wars or revolts than people who have been doing well and the future prospects start looking bad. 6. Fighting other humans is a dangerous business. Unless the ecosystem/economy is stressed or projected to be stressed and the alternative to war is worse, humans are strongly inhibited against war. However, being attacked causes the attacked group to instantly switch into war mode. (Pearl Harbor, 9/11, countless stone age attacks.) 7. While this model applies best to hunter gatherers, the effects of long term selection in the stone age apply to current humans. So (using this model) why the rise of the Tea Party? While the US GDP has been rising, the income has been concentrating in a small fraction of the population. One of the results has been stagnant or declining wages for the bulk of what used to be the middle class. A large fraction has been sliding into the lower class and the prospects for them and their children do not look good. So it is understandable (in terms of this model) that this sub group of the US population with "ever declining self-esteem and social mobility angst" gets infested with radical memes and tends to support external wars as well. (Shades of what happened in Germany about the time the Nazis rose to power.) What to do to repair this problem is obvious: put the bulk of the US population into an economic growth mode where the future looks bright. *How* to do this is another question entirely. Keith Henson _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From reasonerkevin at yahoo.com Wed Jan 12 06:27:13 2011 From: reasonerkevin at yahoo.com (Kevin Freels) Date: Tue, 11 Jan 2011 22:27:13 -0800 (PST) Subject: [ExI] Reframing transhumanism as good vs.
evil In-Reply-To: <4D2CBA29.6030008@gmail.com> References: <4D2CBA29.6030008@gmail.com> Message-ID: <573893.51687.qm@web81602.mail.mud.yahoo.com> Citizen heroics? Selfish geeks? You have it all wrong. Over a hundred people die every minute. Is that not the most disgusting and horrid thing you can imagine considering that with the right effort we could do away with that entirely? We have a goal to end death and suffering. What could be more heroic? It's so damned heroic that many people have trouble grasping the idea. :-) ________________________________ From: AlgaeNymph To: ExI chat list ; Humanity+ Discussion List Sent: Tue, January 11, 2011 2:14:33 PM Subject: [ExI] Reframing transhumanism as good vs. evil A while ago in college, I was in a speech class where one of the things we did was have a group critique our ideas. I had an idea to speak about transhumanism, to which one of my classmates rather indignantly asked me why I wanted to advocate biotech enhancement instead of medicine. That's the problem we have. Even when we're not seen as evil, we're seen as selfish nerds who are utterly indifferent to it. The sad thing is I find myself almost believing this. Causes that comedians can't brand as outright evil or obvious spin are pretty much about fighting evil and/or saving innocents. Citizen heroics, basically. What kind of citizen heroics do *we* have? _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 12 06:28:02 2011 From: spike66 at att.net (spike) Date: Tue, 11 Jan 2011 22:28:02 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams.
In-Reply-To: References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> Message-ID: <007401cbb221$ddd06c80$99714580$@att.net> > On Behalf Of Adrian Tymes Subject: Re: [ExI] Fw: Re: atheists declare religions as scams. On Sat, Jan 8, 2011 at 8:04 AM, Ben Zaiboc wrote: >> Wait, what? If they are fundamentalist theologians, how can they discount the supernatural? >Simple: they (try to) declare themselves outside the rules of logic. Or they are quiet disbelievers. When one is hired to teach theology, the school looks at the professor's degree, but does not quiz the professor on his beliefs. This became a huge issue in Seventh Day Adventism in 1980, when they tried to do exactly that: have the theologians sign up to a consensus statement of belief. That didn't work out. One does not need to be a believer to teach effectively. One does not need to be a believer to write inspiring books and articles. Yes I know it sounds contradictory. I ask you then: suppose I personally knew a way to write something inspirational. I know an inspiring story based on something that actually happened, which I could fictionalize to protect the identities, and it involves one who came thru a very trying time by faith in god. It really is a good story. But you know and I know I am a flaming atheist now. I could use a pseudonym. Is it ethical for me to write it? Would I be lying in a sense? I have been struggling with this question for years, and I am asking for advice here. Johnny? Adrian? Ben? Damien? Keith? Others? spike From atymes at gmail.com Wed Jan 12 01:42:19 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 11 Jan 2011 17:42:19 -0800 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: <4D2CBA29.6030008@gmail.com> References: <4D2CBA29.6030008@gmail.com> Message-ID: 1) What is the boundary between "enhancement" and "medicine"? 1a) Does, say, curing cancer necessarily fall into only one of the two? 1b) What about prosthetics?
1c) What about prosthetics that exceed human baseline performance? 2) What part of "making life better for everyone (who wants a better life) and eliminating many of the root causes of evil (resource scarcity, fear of death, lack of understanding)" is not a long term and more complex form of "fighting evil"? 3) Is not part of the discomfort we cause because we propose to do something about evils that most people have accepted as inevitable? That last one may be the most significant part. People make up all sorts of evil motives for us, but they rarely turn out to be true. They fear that, if we turn out to be right, they will have been in the wrong for opposing us - yet we pursue things so complex that most people do not think they can meaningfully contribute. How many people who read this message, for example, have ever actually worked in a nanotech fab, or have studied the practical skills needed to eventually work in one? Or how about just building your own robot (even a LEGO prototype) that can build something else, or done any DIY biotech experiments? The fraction of people who read this, and who have done one or more of the above, is surely far less than 1. It is also surely much much higher than the fraction of the general public. On Tue, Jan 11, 2011 at 12:14 PM, AlgaeNymph wrote: > A while ago in college, I was in a speech class where one of the things we > did was have a group critique our ideas. I had an idea to speak about > transhumanism, to which one of my classmates rather indignantly asked me why > I wanted to advocate biotech enhancement instead of medicine. > > That's the problem we have. Even when we're not seen as evil, we're seen as > selfish nerds who are utterly indifferent to it. The sad thing is I find > myself almost believing this. Causes that comedians can't brand as outright > evil or obvious spin are pretty much about fighting evil and/or saving > innocents. Citizen heroics, basically.
> > What kind of citizen heroics do *we* have? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From msd001 at gmail.com Wed Jan 12 00:40:54 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 11 Jan 2011 19:40:54 -0500 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: <4D2CBA29.6030008@gmail.com> References: <4D2CBA29.6030008@gmail.com> Message-ID: On Tue, Jan 11, 2011 at 3:14 PM, AlgaeNymph wrote: > A while ago in college, I was in a speech class where one of the things we > did was have a group critique our ideas. I had an idea to speak about > transhumanism, to which one of my classmates rather indignantly asked me why > I wanted to advocate biotech enhancement instead of medicine. > > That's the problem we have. Even when we're not seen as evil, we're seen as > selfish nerds who are utterly indifferent to it. The sad thing is I find > myself almost believing this. Causes that comedians can't brand as outright > evil or obvious spin are pretty much about fighting evil and/or saving > innocents. Citizen heroics, basically. > > What kind of citizen heroics do *we* have? How about not dying of "natural causes"? It may take another 50-100 years before anyone is old enough to sway the ignorant from the default belief that death is inevitable (soon it may only be taxes). I'm patient though... From atymes at gmail.com Wed Jan 12 07:34:16 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 11 Jan 2011 23:34:16 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <007401cbb221$ddd06c80$99714580$@att.net> References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> <007401cbb221$ddd06c80$99714580$@att.net> Message-ID: On Tue, Jan 11, 2011 at 10:28 PM, spike wrote: > I ask you then: suppose I personally knew a way to write something > inspirational.
I know an inspiring story based on something that actually > happened, which I could fictionalize to protect the identities, and it > involves one who came thru a very trying time by faith in god. It really is > a good story. But you know and I know I am a flaming atheist now. I could > use a pseudonym. Is it ethical for me to write it? Would I be lying in a > sense? I have been struggling with this question for years, and I am asking > for advice here. Faith in god, like many things, can be used for good or ill. Just because we see how it is so often (arguably the majority of the time) used for ill, does not mean we must disavow that it can ever have purely beneficial results. You could thus write the story as is with no ethical conflicts...technically. Or, acknowledging that it is a story (and fictionalizing to protect identities might be a good idea regardless), you could alter the faith in god to, say, faith in humanity, or some other equivalent, depending on how exactly this faith helped out. From msd001 at gmail.com Wed Jan 12 00:54:57 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 11 Jan 2011 19:54:57 -0500 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On Tue, Jan 11, 2011 at 3:42 AM, Adrian Tymes wrote: > Volunteers, yes. > > Qualified volunteers? Not as many. > > People who don't run a very high risk of messing things up and turning it into > a disaster, which could be foreseen way in advance? Per the article: > > "There will be tremendous public and political opposition from many members > of the public to a mission which can only end in death." Ironically I just responded that some of us might like to avoid death while the rest of the population feels it is inevitable. In this case the argument is over the duration of a reasonable period of life.
Seriously, why is it acceptable to fight over the "right" to live X amount of time longer than a proposed mission to Mars (or wherever) but any amount of time References: <002d01cbb204$c64a9120$52dfb360$@att.net> Message-ID: I see your test and I raise you one. Max 2011/1/11 spike > Do not undisregard. > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilsa.bartlett at gmail.com Wed Jan 12 03:23:50 2011 From: ilsa.bartlett at gmail.com (ilsa) Date: Tue, 11 Jan 2011 19:23:50 -0800 Subject: [ExI] test In-Reply-To: <002d01cbb204$c64a9120$52dfb360$@att.net> References: <002d01cbb204$c64a9120$52dfb360$@att.net> Message-ID: thank you... I suppose that do not undisregard means reply. Ilsa Bartlett Institute for Rewiring the System 2951 Derby Street #139 Berkeley, CA 94705 510-423-3132 http://www.google.com/profiles/ilsa.bartlett www.hotlux.com/angel.htm www.grassroutesguides.com "Don't ever get so big or important that you can not hear and listen to every other person." -John Coltrane 2011/1/11 spike > Do not undisregard. > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Jan 11 16:39:12 2011 From: pharos at gmail.com (BillK) Date: Tue, 11 Jan 2011 16:39:12 +0000 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On Tue, Jan 11, 2011 at 8:42 AM, Adrian Tymes wrote: > Volunteers, yes. > Qualified volunteers? Not as many.
> > People who don't run a very high risk of messing things up and turning it into > a disaster, which could be foreseen way in advance? Per the article: > > "There will be tremendous public and political opposition from many members > of the public to a mission which can only end in death." > > In so far as the mission takes any degree of public (government) resources, > which a near future mission to Mars is likely to do, it is the public's and > politicians' place to object. > > Remember what happened when the UK sent criminals to Australia. You wouldn't want that to happen again now, would you? BillK ;) From avantguardian2020 at yahoo.com Wed Jan 12 09:14:04 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 12 Jan 2011 01:14:04 -0800 (PST) Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: <866300.71659.qm@web65602.mail.ac4.yahoo.com> ----- Original Message ---- > From: Mike Dougherty > To: ExI chat list > Sent: Tue, January 11, 2011 4:54:57 PM > Subject: Re: [ExI] one way trip to mars > Is an 80 year life a "good long time" or will we feel cheated during > year 79 that the end is approaching? What if the mission to mars is > expected to last 80 years? (but guaranteed to last no longer)? What > about half that? In the face of what Stephen Jay Gould calls "deep time", that is, the aeons, epochs, and eras of geologic time, what is the difference between a life of 10 years or a 100 years? In the grand scheme of things all things are temporary and the stars are but candles in the wind. That being said, who in their right mind would not gladly volunteer for such an opportunity? If I were the only human to ever set foot on Mars and my mission a total failure otherwise, I would die fulfilled beyond the reckoning of all those doomed to be forgotten even by their own descendants. I am actually shocked that so few volunteered.
It just goes to show how cowardly we have become compared to our ancestors of just a generation or two prior. Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure." - Dwight D. Eisenhower From stathisp at gmail.com Sun Jan 9 13:49:36 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 10 Jan 2011 00:49:36 +1100 Subject: [ExI] Fw: Re: simulation as an improvement over reality In-Reply-To: <715646.12049.qm@web114420.mail.gq1.yahoo.com> References: <715646.12049.qm@web114420.mail.gq1.yahoo.com> Message-ID: On Sun, Jan 9, 2011 at 3:00 AM, Ben Zaiboc wrote: >> > > Your argument is the same >> as the old one about a person who, after being copied, >> should be quite happy to shoot himself. Naturally that is >> silly. Nobody would be happy to shoot themselves, >> regardless of how many identical copies of them were in >> existence. >> > >> > Unless the identical copies are in lockstep and >> shooting oneself will >> > leave at least one of the copies running. In that >> case, the stream of >> > consciousness of the terminated copies would continue >> uninterrupted. >> >> > > > Weeeelll... > > If one of a set of copies in lock-step shoots itself, so do > all the others! In lockstep up to the point where a copy is terminated. You naturally have two brains running almost in lockstep in your head connected by a thick cable, the corpus callosum, and if one is damaged or destroyed, your consciousness continues.
-- Stathis Papaioannou From alfio.puglisi at gmail.com Wed Jan 12 09:46:46 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 12 Jan 2011 10:46:46 +0100 Subject: [ExI] mass transit again In-Reply-To: <006c01cbaedc$41a60ab0$c4f22010$@att.net> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> Message-ID: 2011/1/8 spike > This is an example of what I mentioned a few days ago about being a way > bigger threat to society than is global warming, this bigger threat is feral > humans: > > > > > http://www.cnn.com/2011/CRIME/01/07/station.videotaped.incident/index.html?hpt=T2 > > > > Last week it was this: > > > > > http://www.bradenton.com/2010/12/27/2837235/manatee-sheriff-couple-attacked.html > > > > It is a reason why I think most public transit notions are a dead end. > Individual cars serve as suits of armor, providing a defensive barrier. One can have a lot of legitimate reasons to prefer cars to public transit, but safety is not really one of them. Car accidents cause about 6 million injuries and 40,000 deaths per year in the US alone. The number of people killed on mass transit is a minuscule fraction of that, even correcting for the number of travelers. > It could be that I am overly focused on the feral humans thing. It appears so :-) Avoiding TV and sensationalist newspapers often helps, as they paint a much more crime-filled picture than reality. Alfio -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sondre-list at bjellas.com Sun Jan 9 19:29:05 2011 From: sondre-list at bjellas.com (Sondre Bjellås) Date: Sun, 9 Jan 2011 20:29:05 +0100 Subject: [ExI] Morality In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: We should always try to save someone if we can. If you were at the scene of the big tsunami a few years back and you had the option to either save 100 people or 10 people, then you should and would save those 100 people. You did not cause the tsunami, and hence you are not responsible for the death of those 10 people you couldn't save. If you go into a store with your finger in your pants and try to rob the store, and the store owner takes out a gun and shoots at you, and the bullet misses, hits the wall, flies off and kills an innocent pregnant woman - who is charged with the crime? It's the guy who tried to rob the store, EVEN if he didn't have a gun and was just using his finger in his pants. He was the initiating cause of the event. In the moral scenario we are discussing, you are not responsible for the trolley starting to roll; you can decide either that 5 will die or 1 will die with the switch, so you hit the switch and one dies, not five. Being the direct cause of killing anyone is morally very wrong and very, very bad. In a jury, that individual should be found guilty. No question about it, there is no way I can justify the killing of an innocent human being. Pride is an emotion, a feeling. If anyone (many do) build their whole moral beliefs on emotions, they are without any rational and logical framework for basing their morals. I don't mean that in a condescending manner towards you John... but I don't find pride as a virtue in the knowledge that anyone killed another human being.
Society as a whole, and governments, can and do glorify certain actions; that lays foundations for the way we feel pride (self-respect, self-worth, etc.). I'm sure many Americans have a proud feeling about sending their sisters and brothers to fight in the wars in Iraq and Afghanistan. I see the same thing in Norway, we have troops in Afghanistan on a "peace-keeping mission". It's clear that people need to rationalize the act of waging war; nobody wants to know the truth. Nobody can handle the truth. http://www.youtube.com/watch?v=5j2F4VcBmeo - Sondre 2011/1/8 John Clark > On Jan 7, 2011, at 5:33 PM, Sondre Bjellås wrote: > > People die, that's a fact of life. > > > So if you can't save everybody then don't save anybody? > > Morality will improve our probability for survival and help us work well > together in a society. > > > Certainly not in this case! If I do what you recommend, if I do what you > say is the moral action, it will cause more death and misery than if I do the > immoral thing. So I am immoral and proud of it. > > you can't morally justify the killing of a fat guy to save five other > people > > > My gut very strongly tells me that too, but my brain tells me that it is > justified to kill one guy to save 5 other people, and if that's not moral > then to hell with morality. In a real situation I don't know if I would have > the strength to resist my gut instinct, but I do know that if I were on a > jury judging somebody who had, I would vote not guilty. > > John K Clark > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Sun Jan 9 13:37:36 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 10 Jan 2011 00:37:36 +1100 Subject: [ExI] mass transit again In-Reply-To: <006c01cbaedc$41a60ab0$c4f22010$@att.net> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> Message-ID: 2011/1/8 spike : > This is an example of what I mentioned a few days ago about being a way > bigger threat to society than is global warming, this bigger threat is feral > humans: > > > > http://www.cnn.com/2011/CRIME/01/07/station.videotaped.incident/index.html?hpt=T2 > > > > Last week it was this: > > > > http://www.bradenton.com/2010/12/27/2837235/manatee-sheriff-couple-attacked.html > > > > It is a reason why I think most public transit notions are a dead end. > Individual cars serve as suits of armor, providing a defensive barrier. The > challenge is to invent a form of public transit which somehow protects the > unarmed prole from other proles. If we build the infrastructure correctly, > the ferals will devour each other, and the rest of us can go about our > business. In most places you are far more likely to die from a car accident than from being assaulted, and you are up to ten times more likely to die using a car than using public transport (but even more likely to die if you walk, cycle or use a motorbike). Car accidents are the leading cause of death and permanent disability in younger people, with disease taking over as you get older. In fact, ending up dead or crippled from a car accident is so common that it is often not reported, while ending up dead or crippled as a result of an attack by a stranger is news. The result is that people fear assault but are almost indifferent to the far greater risk from traffic accidents. http://www.etsc.eu/oldsite/statoverv.pdf -- Stathis Papaioannou From msd001 at gmail.com Sun Jan 9 07:14:31 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 9 Jan 2011 02:14:31 -0500 Subject: [ExI] whoa!
In-Reply-To: <20110108085454.GF16518@leitl.org> References: <004501cbaec9$bcddc0a0$369941e0$@att.net> <20110108085454.GF16518@leitl.org> Message-ID: On Sat, Jan 8, 2011 at 3:54 AM, Eugen Leitl wrote: > On Fri, Jan 07, 2011 at 04:37:56PM -0800, Ryan Rawson wrote: >> Coworker in Datacenter in SJC said shaking racks was not fun, but he >> wasn't injured. > > I felt a great disturbance in the Net, as if millions of heads > suddenly touched down on rotating platters, and were suddenly > silenced. I fear something terrible has happened. oh... I had a very different (disturbing) image of "heads on platters." From amon at doctrinezero.com Wed Jan 12 12:55:29 2011 From: amon at doctrinezero.com (Amon Zero) Date: Wed, 12 Jan 2011 12:55:29 +0000 Subject: [ExI] one way trip to mars Message-ID: > Then I thought the same kind of method could probably be applied to the mind. Most minds will have a lot in common, just as all life > has a lot in common (we are very similar, genetically, to pineapples, for example), so perhaps, given a sufficiently established > infrastructure, you could travel as a set of mind-parameters, to be applied to a template when you arrive. It may be possible to > compress a person down to megabytes, or even less, in this way. Hey Ben - I like this idea. Makes me think of terms like 'personality vectors' (which to me implies both vector mapping and vectors of transmission). My first thought is that there would still be some detail-rich personality "content" which probably couldn't easily be compressed or described in terms of parameters; namely memories. But I daresay memories contain a lot of redundant information too... recurring themes, images, and associations, for one. 
- A From hkeithhenson at gmail.com Wed Jan 12 15:07:28 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 12 Jan 2011 08:07:28 -0700 Subject: [ExI] amusing Message-ID: http://nextbigfuture.com/2011/01/china-brain-project-is-scheduled-to.html From algaenymph at gmail.com Wed Jan 12 15:47:10 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Wed, 12 Jan 2011 07:47:10 -0800 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: References: <4D2CBA29.6030008@gmail.com> Message-ID: <4D2DCCFE.6010609@gmail.com> On 1/11/11 5:42 PM, Adrian Tymes wrote: > 1) What is the boundary between "enhancement" and "medicine"? Medicine is considered an act of caring, unless it's professional and technological and icky corporate bad. Enhancement is considered cheating (steroids!), unless it involves Hard Work and /natural/ supplements. > 1a) Does, say, curing cancer necessarily fall into only one of the two? Medicine, but it's a rich white man's disease. What we should /really/ be doing is preventing disease caused by Unhealthy American Diets. > 1b) What about prosthetics? Medicine, because it's Restoring the Balance. > 1c) What about prosthetics that exceed human baseline performance? Permissible only if the enhancement is accidental. > 2) What part of "making life better for everyone (who wants a better life) and > eliminating many of the root causes of evil (resource scarcity, fear of death, > lack of understanding)" is not a long term and more complex form of "fighting > evil"? "But is it /really/ better?" The people who control the socially-accepted definition of morality feel that all you need is love and organic gardening. Wanting more is Consumerism, which is what the Corporations make you buy into! Instead, we should be Respecting the Earth and Doing Our Part in the Community. > 3) Is not part of the discomfort we cause, because we propose to do > something about evils that most people have accepted as inevitable? 
Oh, but you'll only improve quality of life for The Rich (who may as well be the aliens from They Live) and create a caste system. Also, without death, we'll have overpopulation and Rich People living forever. At this point, I expect you'll tell me that there's nothing I can do about such people and that I should just ignore them. How is that a good idea when they're not ignoring us while getting more people listening to them than we are? > That last one may be the most significant part. People make up all sorts of > evil motives for us, but they rarely turn out to be true. How do we convince the public otherwise? -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Jan 12 15:56:09 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 12 Jan 2011 16:56:09 +0100 Subject: [ExI] amusing In-Reply-To: References: Message-ID: <20110112155609.GS16518@leitl.org> On Wed, Jan 12, 2011 at 08:07:28AM -0700, Keith Henson wrote: > http://nextbigfuture.com/2011/01/china-brain-project-is-scheduled-to.html Not a bad idea, actually. But you can stick up to 7 consumer Fermis into a single case (or 4, more comfortably) for quite a bit less than a Tesla box (yeah, the double float performance is crippled on consumer Fermis, but you don't need that). Add IB and an 8-port IB switch, and you've got a nice, cheap cluster there. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From algaenymph at gmail.com Wed Jan 12 15:47:11 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Wed, 12 Jan 2011 07:47:11 -0800 Subject: [ExI] Reframing transhumanism as good vs. 
evil In-Reply-To: <573893.51687.qm@web81602.mail.mud.yahoo.com> References: <4D2CBA29.6030008@gmail.com> <573893.51687.qm@web81602.mail.mud.yahoo.com> Message-ID: <4D2DCCFF.6020206@gmail.com> On 1/11/11 10:27 PM, Kevin Freels wrote: > Citizen heroics? Selfish geeks? You have it all wrong. Over a hundred > people die every minute. Is that not the most disgusting and horrid > thing you can imagine, considering that with the right effort we could > do away with that entirely? On 1/11/11 4:40 PM, Mike Dougherty wrote: > How about not dying of "natural causes?" Death is natural! Nature is good! I went to college in San Francisco. I grew up reading White Wolf. Luddite rants are /burned in my brain.../ > We have a goal to end death and suffering. What could be more heroic? > It's so damned heroic that many people have trouble grasping the idea. :-) Yep, we're hero(ine)s with bad publicity. :) Still, reframing death as an illness looks like a good place to start. We still need to find the villains who stand to profit from suppressing immortality. Now how do we similarly reframe other forms of enhancement? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alaneugenebrooks52 at yahoo.com Wed Jan 12 00:47:23 2011 From: alaneugenebrooks52 at yahoo.com (Alan Brooks) Date: Tue, 11 Jan 2011 16:47:23 -0800 (PST) Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona In-Reply-To: <201101111734.p0BHY1P9025611@andromeda.ziaspace.com> Message-ID: <491878.85726.qm@web46101.mail.sp1.yahoo.com> The revolving-door 'justice' 'system' (near-chaos) is feral as well. All day I hear stories about probation, parole, community service, paying for classes. The government now charges miscreants for classes to 'correct' them. You know all this, but you might not realize it is a govt-business partnership. And it isn't about punishment; it is worse than that: govt and business simply do not care. 
If their own kin get in trouble with the law, someone pulls every string to get them off the hook, but they think everyone else deserves law & disorder. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alaneugenebrooks52 at yahoo.com Wed Jan 12 03:36:20 2011 From: alaneugenebrooks52 at yahoo.com (Alan Brooks) Date: Tue, 11 Jan 2011 19:36:20 -0800 (PST) Subject: [ExI] this clip frightening w/ sound on Message-ID: <801577.39531.qm@web46105.mail.sp1.yahoo.com> http://www.youtube.com/watch?v=oJlfAGC8G8w&feature=fvw -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Jan 12 17:24:24 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Jan 2011 09:24:24 -0800 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On Tue, Jan 11, 2011 at 4:54 PM, Mike Dougherty wrote: > Taking a page from England's past, are there any criminals willing to > exchange their life sentence on earth for a pioneering colonial life > in The New Americas (or New Australias)? Difference is, one could - theoretically - return (to Europe, if not to England) from Australia. Merchants did it a lot. > Aside from these considerations, there is always marketing and > outright capitalism: get paid big bucks for the sake of your family > in exchange for completing useful work on Mars. You definitely won't > come back, but your family is practically guaranteed a better life > than if you continue your current 9-5 routine. This is the exact argument used to recruit labor in the less developed regions of China/India/etc. Infamously, this is often used to recruit children into prostitution - their parents will never know. In other words: sure, you can get warm bodies that way. 
But mere warm bodies are not all you need - most things of interest on a Mars mission that you can do with unskilled labor, you can do better with teleoperated robots controlled by skilled labor. (The main thing you can't do that way - make babies - is, if isolated to itself, almost of "flags and footprints" value: symbolic, and part of the solution, but not by itself the thing you'd need to do to establish a truly viable, self-sustaining colony.) From atymes at gmail.com Wed Jan 12 17:34:24 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Jan 2011 09:34:24 -0800 Subject: [ExI] amusing In-Reply-To: References: Message-ID: http://bluebrain.epfl.ch/ On Wed, Jan 12, 2011 at 7:07 AM, Keith Henson wrote: > http://nextbigfuture.com/2011/01/china-brain-project-is-scheduled-to.html > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From atymes at gmail.com Wed Jan 12 18:19:55 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Jan 2011 10:19:55 -0800 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: <4D2DCCFE.6010609@gmail.com> References: <4D2CBA29.6030008@gmail.com> <4D2DCCFE.6010609@gmail.com> Message-ID: 2011/1/12 AlgaeNymph : > "But is it really better?" The people who control the socially-accepted > definition of morality feel that all you need is love and organic > gardening. Wanting more is Consumerism, which is what the Corporations make > you buy into! Instead, we should be Respecting the Earth and Doing Our Part > in the Community. And that is one of the "evils" we fight against. 
Wanting to cure death for everyone (perhaps the rich would benefit first, as often happens, but this does not prevent us from spreading it to everyone) is something that scares them, so they fight against our goal, making up facts and committing all sorts of logical fallacies in their crusade against those who genuinely seek a better tomorrow. They insist that we will mess it up - but the only alternative to trying to do something about our problems is not doing anything about our problems, and punishing those who try. That attitude has demonstrable results in, for example, much of the Middle East: see what happens to teachers and modernists who so much as suggest that perhaps girls ought to receive the same education as boys. (For extra drama - if a slight logical fallacy, but again, the people you're debating often don't care about fallacies* - point out that the one who recently shot a US Representative was thinking this way.) "Luddite" is starting to become accepted as a term for those who would trash the generally accepted benefits of modern tech. * This itself, BTW, is one of the main problems. The opposition, by and large, has accepted the practice of making up likely-sounding but in fact false data, and pretends that fallacies do not invalidate their arguments. They've also convinced much of the public that this is valid. One tactic to counter them is to call them on this whenever they do it. The next time they claim that just because you're a transhumanist you must be evil, and therefore your words can be dismissed without consideration, try pointing out that they're defending a point of view that has promoted the slaughter of over a billion people (and decline to cite any sources, other than vague mumblings about crusades and current wars) while you're promoting a point of view that could remove most of the reasons for war. 
The next time they claim the benefits of enhancement can only ever accrue to The Rich, ask them how much The Rich paid them to say that, and then point out the number of "astroturf" organizations that The Rich routinely pay to pretend to be incensed, to trip up and slow down those who really do want to help The Non-Rich. When a lot of the audience forget why fallacies invalidate arguments, and your opposition tries to take advantage of that to discredit your side, try countering with an exaggerated example - which you then acknowledge as probably invalid, but only to the degree that your opposition's fallacy is also invalid - to remind the audience. (On a related note: I'd like to see someone counter one of these "sex predator" moral panics that result in "tougher" - but actually less effective - laws by pointing out, to those urging the passage of these laws, that one could take issue with how *they* looked at one's children, claim to be convinced that they intended to rape those children, and get them branded - and banned from much of society - for life, when all they actually did was look at someone's kids. Oh, and of course the courts would have to take their own kids away, because of course a sex predator is presumptively unqualified to raise children. Twist the screws by asking them why they want to martyr their own children in order to pass a law that actually makes our children less safe, by making it harder for cops to find the real predators.) > Oh, but you'll only improve quality of life for The Rich (who may as well be > the aliens from They Live) and create a caste system. No more a caste system than already exists. > Also, without death, > we'll have overpopulation and Rich People living forever. Assuming a) this never spreads beyond rich people and b) we don't eventually get off of Earth. 
Many of the arguments against us boil down to, "X has never yet happened therefore X can never happen" - if they're even explicit about their assumption of the indefinite status quo. > At this point, I expect you'll tell me that there's nothing I can do about > such people and that I should just ignore them. How is that a good idea > when they're not ignoring us while getting more people listening to them > than we are? You're right, someone has to focus on countering them in the media. From your original problem description, it sounds like you might wish to become that someone. > That last one may be the most significant part. People make up all sorts of > evil motives for us, but they rarely turn out to be true. > > How do we convince the public otherwise? That is the crux of your college assignment, no? From bbenzai at yahoo.com Wed Jan 12 20:24:11 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 12 Jan 2011 12:24:11 -0800 (PST) Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: Message-ID: <364107.88768.qm@web114419.mail.gq1.yahoo.com> AlgaeNymph asked: > > A while ago in college, I was in a speech class where one > of the things > we did was have a group critique our ideas. I had an > idea to speak > about transhumanism, to which one of my classmates rather > indignantly > asked me why I wanted to advocate biotech enhancement > instead of medicine. > > That's the problem we have. Even when we're not seen > as evil, we're > seen as selfish nerds who are utterly indifferent to > it. The sad thing > is I find myself almost believing this. Causes that > comedians can't > brand as outright evil or obvious spin are pretty much > about fighting > evil and/or saving innocents. Citizen heroics, > basically. > > What kind of citizen heroics do *we* have? You've got to be joking. What's more heroic than saving the lives of 100,000 people a day? Even Superman is never shown doing that. 
Ben Zaiboc From bbenzai at yahoo.com Wed Jan 12 20:28:27 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 12 Jan 2011 12:28:27 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: Message-ID: <485988.6615.qm@web114416.mail.gq1.yahoo.com> "spike" wrote: > I ask you then: suppose I personally knew a way to write > something > inspirational. I know an inspiring story based on > something that actually > happened, which I could fictionalize to protect the > identities, and it > involves one who came thru a very trying time by faith in > god. It really is > a good story. But you know and I know I am a flaming > atheist now. I could > use a pseudonym. Is it ethical for me to write > it? Would I be lying in a > sense? I have been struggling with this question for > years, and I am asking > for advice here. Johnny? Adrian? > Ben? Damien? Keith? Others? Of course you wouldn't be lying, not if you know it's a true story. As for whether you *should* write it, that's another thing. There are pros and cons. One of the cons is providing fuel for the god-squad. I'm thinking, though, that there must be a way to write it in such a way that you tell the truth of the story, yet make it plain that it was the person's own resources, not the existence of a supernatural being, that made the difference. People can do good, or remarkable things, while holding false beliefs, so imagine how much better they could do with true ones! Something along the lines of the Wizard of Oz, where the protagonists are all after something external, only to discover they had it inside them all along. This guy came through a very trying time by faith in god, but we know there isn't a god (almost certainly), so it was the belief, not the god, that was the important thing. If we believe in things, we can do wonders. If we believe in things that are actually true, rather than imagined, we can do even more. 
It's belief in yourself that's the most important thing. That sort of slant. Ben Zaiboc From bbenzai at yahoo.com Wed Jan 12 20:48:57 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 12 Jan 2011 12:48:57 -0800 (PST) Subject: [ExI] mass transit again In-Reply-To: Message-ID: <688002.47874.qm@web114413.mail.gq1.yahoo.com> Stathis Papaioannou wrote: > In most places you are far more likely to die from a car > accident than > from being assaulted, and you are up to ten times more > likely to die > using a car than using public transport (but even more > likely to die > if you walk, cycle or use a motorbike). Car accidents are > the leading > cause of death and permanent disability in younger people, > with > disease taking over as you get older. In fact, ending up dead > or crippled > from a car accident is so common that it is often not > reported, while > ending up dead or crippled as a result of an attack by a > stranger is > news. The result is that people fear assault but are > almost > indifferent to the far greater risk from traffic > accidents. This doesn't constitute an argument in favour of public transport, though; it's an argument in favour of learning to drive properly. Comparing traffic accidents with assaults is not comparing like with like. Nobody gets mugged as a result of their poor walking skills. I wouldn't be surprised if, once self-driving cars become the norm, driving (or being driven, rather) was much safer than public transport, even taking into account the fact that the public transport will be self-driving as well. Ben Zaiboc From natasha at natasha.cc Wed Jan 12 21:00:36 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 12 Jan 2011 16:00:36 -0500 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: <364107.88768.qm@web114419.mail.gq1.yahoo.com> References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> Message-ID: <20110112160036.qr0w118sesowckws@webmail.natasha.cc> [? 
wrote the following] >> A while ago in college, I was in a speech class where one >> of the things >> we did was have a group critique our ideas. I had an >> idea to speak >> about transhumanism, to which one of my classmates rather >> indignantly >> asked me why I wanted to advocate biotech enhancement >> instead of medicine. >> >> That's the problem we have. Even when we're not seen >> as evil, we're >> seen as selfish nerds who are utterly indifferent to >> it. The sad thing >> is I find myself almost believing this. Causes that >> comedians can't >> brand as outright evil or obvious spin are pretty much >> about fighting >> evil and/or saving innocents. Citizen heroics, >> basically. >> >> What kind of citizen heroics do *we* have? Medicine is a very necessary component of human enhancement. The person asking you this question was not clear, and you could have told him/her that one of the most beneficial aspects of enhancement is its ethical and courageous use of medicine to help cure people of dreaded diseases and tragic injuries. Natasha From spike66 at att.net Wed Jan 12 21:22:27 2011 From: spike66 at att.net (spike) Date: Wed, 12 Jan 2011 13:22:27 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <485988.6615.qm@web114416.mail.gq1.yahoo.com> References: <485988.6615.qm@web114416.mail.gq1.yahoo.com> Message-ID: <00a601cbb29e$d073fbb0$715bf310$@att.net> ... On Behalf Of Ben Zaiboc Subject: Re: [ExI] Fw: Re: atheists declare religions as scams. "spike" wrote: >> I ask you then: suppose I personally knew a way to write something >> inspirational... But you know and I know I am a flaming atheist now... Would I be lying in a sense? I have been struggling >> with this question for years... >Of course you wouldn't be lying, not if you know it's a true story. As for whether you *should* write it, that's another thing. There are pros and cons. 
One of the cons is providing fuel for the god-squad...Ben Zaiboc Thanks Ben, this gets me part of the way there. Do let me focus the question a bit. The action all takes place in the 1980 to 1983 timeframe, and it includes a character who likes to watch beasts of all kinds, and really thinks deeply about what he sees. He isn't the central character in the story, but he plays a big part, and I honestly don't see a reasonable way to write that character out of the story. The easiest way for me to write that is in first person thru the eyes of that character, actually. That way I don't need to make up much. For me, fiction is damn hard, but just writing down what happened is easy, and I can do it in a mostly entertaining and authentic-sounding way. One of the important branches in that story is this character hearing about evolution for the first time when in college (!) from Carl Sagan's Cosmos series, and how it rocked his world. If I ended the story in early 1983, or somehow wrote that character out of the story, it would be inspirational. I could leave that character searching searching searching, without resolving that line of conflict, a lost and wandering soul. The crowd that goes in for this kind of literature would love it. If I continue it until about 1985, that character becomes a flaming atheist, and those who read inspirational novels would hate it. Furthermore, I really don't want to either disturb or inspire the god squad. I don't want to! I don't even know what is the ethical thing to do, so to date I have taken the time-honored approach and done nothing. spike From stefano.vaj at gmail.com Wed Jan 12 22:13:20 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 12 Jan 2011 23:13:20 +0100 Subject: [ExI] Reframing transhumanism as good vs. 
evil In-Reply-To: <20110112160036.qr0w118sesowckws@webmail.natasha.cc> References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> <20110112160036.qr0w118sesowckws@webmail.natasha.cc> Message-ID: On 12 January 2011 22:00, wrote: > Medicine is a very necessary component of human enhancement. The person > asking you this question was not clear and you could have told him/her that > one of the most beneficial aspects of enhancement is its ethical and > courageous use of medicine to help cure people from dreaded diseases and > tragic injuries. > An additional, easy retort is that *medicine itself* has never been perfectly orthodox from a utilitarian POV, nor "sustainable" by any means. At any given time, more human lives and suffering would have been spared by reallocating globally the resources devoted to medical research, and to actual day-by-day medicine for that matter, to some other end, such as feeding the hungry, increasing safety, etc. Yet it is part of very traditional medical ethics that, e.g., you do whatever you can for the patient at hand. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Jan 13 00:03:04 2011 From: pharos at gmail.com (BillK) Date: Thu, 13 Jan 2011 00:03:04 +0000 Subject: [ExI] mass transit again In-Reply-To: <688002.47874.qm@web114413.mail.gq1.yahoo.com> References: <688002.47874.qm@web114413.mail.gq1.yahoo.com> Message-ID: On Wed, Jan 12, 2011 at 8:48 PM, Ben Zaiboc wrote: > This doesn't constitute an argument in favour of public transport, though, > it's an argument in favour of learning to drive properly. > > Comparing traffic accidents with assaults is not comparing like with like. > Nobody gets mugged as a result of their poor walking skills. 
> > I wouldn't be surprised if, once self-driving cars become the norm, driving > (or being driven, rather) was much safer than public transport, even taking > into account the fact that the public transport will be self-driving as well. > > No, it's statistics. Even if you are the best driver in the world, some other idiot might drive into you. (Note: it's always the other driver who is the idiot.) Or your brakes might fail, or you might nod off for a moment, etc. Nobody is a perfect driver (present company excluded, of course). You will never get people to drive safely. It's too much fun to take risks. And if they consider themselves to be an expert driver, they take even more risks. Any reduction in road deaths and injuries is achieved by things like seat belts, crumple zones, banning drunk drivers, etc., *not* by better driving. On public transport, you can nod off, be drunk, be on the phone, etc. and you will still get safely to your destination (99.9%). I doubt if self-driving cars will become widespread in the US. The US is too sue-happy. Look at what happened to Toyota when they had a dodgy brake pedal. Robot cars are just begging for class action lawsuits. BillK From msd001 at gmail.com Thu Jan 13 00:28:49 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 12 Jan 2011 19:28:49 -0500 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On Wed, Jan 12, 2011 at 12:24 PM, Adrian Tymes wrote: > In other words: sure, you can get warm bodies that way. But mere > warm bodies are not all you need - most things of interest on a Mars > mission that you can do with unskilled labor, you can do better with > teleoperated robots controlled by skilled labor. 
(The main thing you > can't do that way - make babies - is, if isolated to itself, almost of > "flags and footprints" value: symbolic, and part of the solution, but > not by itself the thing you'd need to do to establish a truly viable, > self-sustaining colony.) So I'll put your name on the list (next to mine) that going to Mars "in-person" is a silly idea. Send the machines. From pharos at gmail.com Thu Jan 13 00:39:53 2011 From: pharos at gmail.com (BillK) Date: Thu, 13 Jan 2011 00:39:53 +0000 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On Thu, Jan 13, 2011 at 12:28 AM, Mike Dougherty wrote: > So I'll put your name on the list (next to mine) that going to Mars > "in-person" is a silly idea. Send the machines. > > Hey, I can think of lots of places on Earth that I would prefer to send a robot than go myself! And, considering airport security fondling, I've just increased the list. BillK From atymes at gmail.com Thu Jan 13 00:56:30 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Jan 2011 16:56:30 -0800 Subject: [ExI] EteRNA Message-ID: http://eterna.cmu.edu An RNA design simulation. The main initial part of the game is this: you have a certain shape of RNA (and thus, protein) that you want to make. You know where the base pairs are supposed to go, but which nucleotides go where? The tutorials and simple challenges (at least, the ones I've seen so far) are at most a few hundreds of base pairs, usually less than one hundred, so the computer knows what each base pair combination folds into. Once you get past 10,000 points, a feature opens up where you can guess the pair arrangement for a more complex RNA. This gets synthesized in real life; the closest guesser wins a prize. So far, so good? Get this: there have already been strategy guides written for it (and it was just launched earlier this week). 
Imagine what happens when biochemistry students - who will professionally, whether in academia or industry post-graduation, be making proteins - get ahold of those as part of their course curricula (to say nothing of being able to refer to them while on the job). Then imagine what happens once the DIY biotech types get wind of this as a training tool. Granted, it won't tell you what the proteins do. But for example, HIV is one of the challenges. What if they later extend it so you can draw your own RNA shape - say, a complement to HIV, to act as a simple antibody - and pick that up as your own challenge? Take it with a grain of salt, of course. I am far from being the world's foremost expert in biochemistry. But the simulation is there, free for anyone to use right now. From spike66 at att.net Thu Jan 13 00:57:57 2011 From: spike66 at att.net (spike) Date: Wed, 12 Jan 2011 16:57:57 -0800 Subject: [ExI] mass transit again In-Reply-To: References: <688002.47874.qm@web114413.mail.gq1.yahoo.com> Message-ID: <000c01cbb2bc$eb281720$c1784560$@att.net> ... On Behalf Of BillK ... >...I doubt if self-driving cars will become widespread in the US. The US is too sue-happy. Look at what happened to Toyota when they had a dodgy brake pedal. Robot cars are just begging for class action lawsuits...BillK There is a roadblock that can be overcome, but only by having the self-driving feature as a customer aftermarket add-on. I agree they will never come from the factory, but if the customer adds it and it gets in an accident, the customer is liable. I have a feature on my Detroit which is a backup warning. I have found it completely reliable, to the extent that I can put my car in reverse and not even bother looking into the mirror. If that warning tone doesn't sound, there is nothing back there. Of course if it fails and I hit something, the cops don't care if I have one of those devices, it's still my fault. 
Even so, I have an easier time imagining an aftermarket add-on which automatically brakes if the driver fails to do so. The driver still must drive, but it takes on one more function. spike From atymes at gmail.com Thu Jan 13 01:26:41 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 12 Jan 2011 17:26:41 -0800 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On Wed, Jan 12, 2011 at 4:28 PM, Mike Dougherty wrote: > So I'll put your name on the list (next to mine) that going to Mars > "in-person" is a silly idea. Send the machines. Until we can do a round trip, or safely establish habitation there? Yes. Both of those would be served well by colonizing Earth orbit first. Doesn't even have to be the Moon (in fact, may be simpler not to colonize the Moon first). From avantguardian2020 at yahoo.com Thu Jan 13 03:33:17 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 12 Jan 2011 19:33:17 -0800 (PST) Subject: [ExI] atheists declare religions as scams. In-Reply-To: <4D270971.3060505@aleph.se> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> Message-ID: <749674.88160.qm@web65603.mail.ac4.yahoo.com> 
Science could measure the efficacy of moral systems by observing, analyzing, and comparing various indicator statistics of societies, cultures, and subcultures that espouse those moral systems. The main problem with this approach is that these measurements would have to be conducted after the fact, since that is a limitation of the empirical method: all predictive science is extrapolatory. Minor problems with the scientific approach would include the difficulty in choosing a set of indicators to compare that is both meaningful and readily measurable. For example, possible measures might include average wealth, Gini index, crime rate, happiness surveys, evolutionary stability, and others, but is any combination of these measures sufficiently representative of "goodness"? I guess what I am saying is that if you can pin down a definition of "good", then science can measure it for you. Approaching this problem from a background as a biologist, I would say that the most important measure is evolutionary stability. A society of altruistic saints and martyrs might be philosophically blameless but probably would not survive very long in the real world. > If you are a moral realist (moral claims can be true or false), it is not >obvious that the truth of moral statements can be investigated through a >scientific experiment. How do you measure the appropriateness of an action? Scientific investigation of the appropriateness of an action would entail having a test group perform said action many times and then looking at the statistical distribution of outcomes. If there is a lack of sufficient historical data regarding agents that have performed the action, one would need to set up an experiment involving real or simulated moral agents divided into groups that either perform or do not perform said action and then compare their outcomes. > How do you test if utilitarianism is correct? 
Philosophically speaking, one would ask oneself if one agreed with the premises of utilitarianism and then check to make sure the conclusions follow logically from those premises. That is the maximum extent of certainty one can have in *any* axiomatic system of knowledge, due to considerations such as the Münchhausen Trilemma and Gödel's Incompleteness Theorems. On the other hand, to empirically test utilitarianism one has merely to set up an experiment involving two desert islands. Populate one desert island with a group of utilitarians and another island with a control group of non-utilitarians and then compare outcomes. Alternatively one could program and play SimCity: The Utilitarian Edition and see what happens. >And if you are a moral noncognitivist (moral claims are not true or false, but >like attitudes or emotions) or error theorist (moral claims are erroneous like >religion) at most you can collect statistics and correlates of why people >believe certain things. If you are a subjectivist (moral claims are about >subjective human mental states; they may or may not be relative to the speaker >or their culture) you might be able to investigate them somewhat, with the usual >messiness of soft science. All these philosophical categories only confound the central question: does the given moral system succeed in its objectives? Note that what one thinks about the moral system can be quite independent of whether the morals themselves work or not. > Note that logic and philosophy can say a lot about the consistency of moral >systems: it is pretty easy to show how many moral systems are self-contradictory >or produce outcomes their proponents don't want, and it is sometimes even >possible to prove more general theorems that show that certain approaches are in >trouble (e.g. see http://sciencethatmatters.com/archives/38 ) Philosophy has >been doing this for ages, to the minor annoyance of believers. 
This is the unavoidable result of moralities being axiomatic systems. Starting from moral axioms such as "thou shall not murder" and "thou shall honor thy father", it is quite easy to come up with morally undecidable situations such as "should thou honor thy father if he is a murderer?" This is Gödel's incompleteness in action. So a moral system can never be both complete and consistent from a philosophical standpoint. But despite this unavoidable flaw, a system of morals could lead to the evolutionary success of those who adhere to it from an empirical standpoint. > Science is really good at undermining factually wrong claims (like the Earth >being flat or that prayer has measurable positive effects on the weather). It >might also be possible to use it to say things about properties of moral systems >such as their computational complexity, evolutionary stability or how they tie >in with the cognitive neuroscience and society of their believers. It is just >that science is pretty bad at proving anything about the *correctness* of moral >statements unless it is supplemented by a theory of what counts as correct, and >that tends to come from the philosophy department (or, worse, the theology >department...) If a system of morality does not promote the survival of those that subscribe to it, or worse yet undermines it (think Shakers), then *correctness* is an irrelevant indicator of moral efficacy. When they are in contradiction, reality trumps conviction. With regard to myself, for over a decade now I have subscribed to a game-theoretical view of morality. In fact, for its simplicity, tit-for-tat is IMHO a perfectly valid and efficacious moral code. If you think about it, it subsumes both an-eye-for-an-eye and the Golden Rule into its pithy succinctness. Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D. 
Eisenhower From darren.greer3 at gmail.com Thu Jan 13 03:48:01 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 12 Jan 2011 23:48:01 -0400 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: <364107.88768.qm@web114419.mail.gq1.yahoo.com> References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> Message-ID: I had an > idea to speak > about transhumanism, to which one of my classmates rather > indignantly > asked me why I wanted to advocate biotech enhancement > instead of medicine. That kind of question irks me slightly: it makes the mistake of assuming popular modalities in the discipline *are* the discipline. Since when is biotech not medicine? Transport back five hundred years and you'd have a student asking a doctor with some new approaches why he'd rather perform surgery than bleed the patient with leeches and gauge his dominant "humor." Darren On Wed, Jan 12, 2011 at 4:24 PM, Ben Zaiboc wrote: > AlgaeNymph asked: > > > > > > A while ago in college, I was in a speech class where one > > of the things > > we did was have a group critique our ideas. I had an > > idea to speak > > about transhumanism, to which one of my classmates rather > > indignantly > > asked me why I wanted to advocate biotech enhancement > > instead of medicine. > > > > That's the problem we have. Even when we're not seen > > as evil, we're > > seen as selfish nerds who are utterly indifferent to > > it. The sad thing > > is I find myself almost believing this. Causes that > > comedians can't > > brand as outright evil or obvious spin are pretty much > > about fighting > > evil and/or saving innocents. Citizen heroics, > > basically. > > > > What kind of citizen heroics do *we* have? > > > You've got to be joking. > > What's more heroic than saving the lives of 100,000 people a day? > > Even superman is never shown doing that. 
> > > Ben Zaiboc > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Thu Jan 13 06:22:15 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 12 Jan 2011 23:22:15 -0700 Subject: [ExI] "Feral" humans, NOT Message-ID: If you read the descriptions of Jared Lee Loughner's behavior and personality changes in the context of this article: http://discovermagazine.com/2010/jun/03-the-insanity-virus then there is a very good chance he was descending into his first episode of schizophrenia at the time he did the shootings. If the article has the cause of schizophrenia right, that it is the result of a reactivated 60-million-year-old retrovirus integrated into our genome, then perhaps as a society we could better deal with it. If schizophrenia is the result of a reactivated infection of HERV-W, then we no longer have to think of the person who has schizophrenia as being at fault any more than a person who has TB or cancer. And perhaps we can be more observant, especially if people can be tested for antibodies to the virus. You see a friend start acting weird and withdrawing, maybe they should be tested and treated, because there might be a cause for the odd behavior that they have no control over. It's possible that a high level of HERV-W should be reason to keep people from buying guns. Interesting possibility. Keith From possiblepaths2050 at gmail.com Thu Jan 13 06:26:56 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 12 Jan 2011 23:26:56 -0700 Subject: [ExI] Reframing transhumanism as good vs. 
evil In-Reply-To: References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> Message-ID: Ben, bellow forth with your best rendition of a mad scientist/supervillain laugh, and tell your classmate that you and all the other evil transhumanists are only getting started with the master plan to exploit the weak and conquer the world. "You have seen nothing yet!!!" John ; ) On 1/12/11, Darren Greer wrote: > I had an >> idea to speak >> about transhumanism, to which one of my classmates rather >> indignantly >> asked me why I wanted to advocate biotech enhancement >> instead of medicine. > > That kind of question irks me slightly: it makes the mistake of assuming > popular modalities in the discipline *are* the discipline. Since when is > biotech not medicine? Transport back five hundred years and you'd have a > student asking a doctor with some new approaches why he'd rather perform > surgery than bleed the patient with leeches and gauge his dominant "humor." > > Darren > > On Wed, Jan 12, 2011 at 4:24 PM, Ben Zaiboc wrote: > >> AlgaeNymph asked: >> >> >> > >> > A while ago in college, I was in a speech class where one >> > of the things >> > we did was have a group critique our ideas. I had an >> > idea to speak >> > about transhumanism, to which one of my classmates rather >> > indignantly >> > asked me why I wanted to advocate biotech enhancement >> > instead of medicine. >> > >> > That's the problem we have. Even when we're not seen >> > as evil, we're >> > seen as selfish nerds who are utterly indifferent to >> > it. The sad thing >> > is I find myself almost believing this. Causes that >> > comedians can't >> > brand as outright evil or obvious spin are pretty much >> > about fighting >> > evil and/or saving innocents. Citizen heroics, >> > basically. >> > >> > What kind of citizen heroics do *we* have? >> >> >> You've got to be joking. >> >> What's more heroic than saving the lives of 100,000 people a day? >> >> Even superman is never shown doing that. 
>> >> >> Ben Zaiboc >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > -- > *"It's supposed to be hard. If it wasn't hard everyone would do it. The > 'hard' is what makes it great."* > * > * > *--A League of Their Own > * > From eugen at leitl.org Thu Jan 13 07:47:27 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 13 Jan 2011 08:47:27 +0100 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: <20110113074727.GB16518@leitl.org> On Thu, Jan 13, 2011 at 12:39:53AM +0000, BillK wrote: > And, considering airport security fondling, I've just increased the list. Harassments will continue until subjects voluntarily remain in their cages. From anders at aleph.se Thu Jan 13 09:34:05 2011 From: anders at aleph.se (Anders Sandberg) Date: Thu, 13 Jan 2011 10:34:05 +0100 Subject: [ExI] Mass transit Message-ID: <4D2EC70D.2020304@aleph.se> On 2011-01-08 03:32, spike wrote: > This is an example of what I mentioned a few days ago about being a way > bigger threat to society than is global warming, this bigger threat is > feral humans ... > It is a reason why I think most public transit notions are a dead end. > Individual cars serve as suits of armor, providing a defensive barrier. They also serve as propelled weapons, very good at hurting squishy humans. Of course, sometimes it is nice to have defensive barriers against other mammals: http://hn.se/nyheter/halland/1.1078155-skogens-drottning-tog-ett-skutt?articleRenderMode=image&image=bigTop (as Neatorama put it, an elk jumping over a Volvo, that is Sweden in a nutshell) The problem is feral humans, not mass transit. After all, there are ferals in supermarkets, schools and offices too. 
If ferals are such a problem that they impair the utility of mass transit, then we should probably consider finding ways of preventing too large uncontrolled groups in buildings too. > In the long run I am thinking something like the old tech ski lift > technology is the way to move proles in the big city: moving cable, with > the option of riding alone in an individual car or with as many as four > riders per car. I don't know the exact mechanism, but it should be at > least possible to create a car that individually transfers from one > moving cable to another, so that one need only enter the coordinates of > the city block where one wants to end, and the rest is mechanized. This > does away with most parking lots, or rather moves them out where there > is plenty of room and the theoretical possibility of making the parking > lots safe. It could also be a light rail or automated wheeled vehicle infrastructure, perhaps underground. Which reminds me of a question I have been thinking about for a while: how much cheaper could tunneling become if we had either decent robotics, or decent nanotechnology? Once tunneling becomes cheap, a lot of systems can be buried and we can retrofit many current cities in cool ways. There are interesting issues of economies of scale for mass transit. Depending on how the cost function looks you get completely different topologies. If it is expensive to make, then you get a tree structure (think many subways). If it is really cheap you can make it a grid (think roads), which in the limit approaches arbitrary point-to-point connectivity (think Internet). Population density acts as local scaling for the density of branches; one can probably do a conformal map based on it to the plane, do a layout and then transform back for a first approximation of the ideal grid. In fact, one can likely weigh in a lot of cost functions into this mapping. 
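The two extreme regimes above can be seen in a toy computation: compare the total track length of the cheapest connected network (a minimum spanning tree, the "expensive" tree regime) with full point-to-point connectivity over the same stations. A minimal sketch, with a purely hypothetical random station layout:

```python
# Toy comparison of the tree vs point-to-point cost regimes: total edge
# length (a proxy for construction cost) over 30 randomly placed stations.
# The layout and station count are illustrative assumptions.
import itertools
import math
import random

random.seed(1)
stations = [(random.random(), random.random()) for _ in range(30)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Tree regime: Prim's algorithm builds the minimum spanning tree.
in_tree = {0}
tree_len = 0.0
while len(in_tree) < len(stations):
    i, j = min(
        ((i, j) for i in in_tree
         for j in range(len(stations)) if j not in in_tree),
        key=lambda e: dist(stations[e[0]], stations[e[1]]),
    )
    tree_len += dist(stations[i], stations[j])
    in_tree.add(j)

# Point-to-point regime: every pair of stations directly connected.
full_len = sum(dist(a, b) for a, b in itertools.combinations(stations, 2))

print(f"tree: {tree_len:.2f}, point-to-point: {full_len:.2f}")
```

The tree comes out far cheaper, which is why it only gets built when links are expensive; as link cost falls, the network can afford to move toward the grid and, in the limit, the complete graph.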
(The big headache is always funding - plenty of free rider problems, public goods problems, principal agent problems and NIMBY. But the math is neat!) > It could be that I am overly focused on the feral humans thing. Damien's > book Transcension has Dr. Malik being slain by ferals in a most > memorable and disturbing passage. Fortunately he was frozen and was > eventually saved by that technology. I propose avoiding all possible > contact with them. Typically, bad people are a small fraction p of all people (and I think Pinker is right that it has decreased historically). But when you get a large group together the probability of having at least one grows as 1-(1-p)^N, approaching 1 as N is large (for small N it is ~pN). On the other hand, if the bad person does something bad against one other person in the group, the chance of you being hit is 1/N. So the total risk to you behaves approximately as Np/N = p - the risk is constant whether you are in an elevator with a stranger, or at a mass rally. Some might worry about ferals doing attacks on larger groups, and see that as an argument for avoiding mass rallies (beside the other reasons). According to what I have read on the statistics of terrorism, we should expect the probability of X people being hurt to scale as CX^-a where a is ~1.7 in developed countries (http://arxiv.org/abs/physics/0502014). C is of course very very small. Other forms of violence or mass accidents likely follow the same curve (as observed by Richardson). So the approximate risk of being hurt in a size X attack is pNX/N = pX, and the probability of that is pCX^(1-a). The mean for X is 22.66 (in fact, only 37% of terrorist attacks injure or kill anybody) and the incidence is around 1.47 attacks per day anywhere in the world, so the empirical risk per day of being affected is 4.76*10^-9. 
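The arithmetic above can be reproduced in a few lines. One assumption of mine: normalizing by a world population of ~7 billion, which is what makes the quoted per-day figure come out; the fraction p of bad actors is purely illustrative:

```python
# Reconstructing the risk figures in the paragraph above. The exposed
# population of ~7e9 and the value of p are assumptions for illustration.
p = 1e-4  # illustrative fraction of "ferals" in the population

# P(at least one bad actor in a group of N) = 1 - (1-p)^N, which is ~pN
# for small N and approaches 1 for large N.
for N in (2, 100, 10_000):
    print(N, 1 - (1 - p) ** N)

incidence = 1.47         # terrorist attacks per day, worldwide
mean_casualties = 22.66  # mean number of people hurt per attack
world_pop = 7e9          # assumed population at risk
risk_per_day = incidence * mean_casualties / world_pop
print(f"risk per person per day: {risk_per_day:.2e}")
```

With those inputs the per-person daily risk lands at about 4.76e-9, matching the figure in the text.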
Tails matter here, since it is a very skew distribution - the small probability of very large attacks (where the above approximation is no longer valid) makes the risk weakly dependent on N if one does a proper Bayesian calculation (I think), so there might be a mild reason to avoid very big crowds. However, the tiny value of pC makes this a very minor worry. I would be more worried about catching some illness from the crowd or getting hurt in traffic getting to it. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From pharos at gmail.com Thu Jan 13 09:58:41 2011 From: pharos at gmail.com (BillK) Date: Thu, 13 Jan 2011 09:58:41 +0000 Subject: [ExI] "Feral" humans, NOT In-Reply-To: References: Message-ID: On Thu, Jan 13, 2011 at 6:22 AM, Keith Henson wrote: > If schizophrenia is the result of a reactivated infection of HERV-W, > then we no longer have to think of the person who has schizophrenia as > being at fault any more than a person who has TB or cancer. > > And perhaps we can be more observant, especially if people can be > tested for antibodies to the virus. You see a friend start acting > weird and withdrawing, maybe they should be tested and treated, because > there might be a cause for the odd behavior that they have no > control over. > > It's possible that a high level of HERV-W should be reason to keep > people from buying guns. > > Yes, if someone acts weird (like not supporting the government) then they should be treated for dangerous behaviour. Millions of children in the US already get fed psychiatric drugs to treat behavioural problems. BillK From avantguardian2020 at yahoo.com Thu Jan 13 10:01:39 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Thu, 13 Jan 2011 02:01:39 -0800 (PST) Subject: [ExI] atheists declare religions as scams. 
In-Reply-To: <749674.88160.qm@web65603.mail.ac4.yahoo.com> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <4D270971.3060505@aleph.se> <749674.88160.qm@web65603.mail.ac4.yahoo.com> Message-ID: <671917.6101.qm@web65602.mail.ac4.yahoo.com> ----- Original Message ---- > From: The Avantguardian > To: ExI chat list > Sent: Wed, January 12, 2011 7:33:17 PM > Subject: Re: [ExI] atheists declare religions as scams. > This is the unavoidable result of moralities being axiomatic systems. Starting > from moral axioms such as "thou shall not murder" and "thou shall honor thy > father", it is quite easy to come up with morally undecidable situations such > as "should thou honor thy father if he is a murderer?" *This is Gödel's > incompleteness in action.* Actually, for purposes of accuracy, I will retract this statement, as I doubt any moral system has sufficient arithmetic for Gödel's theorem to be applicable. But it probably would become applicable were one to start talking about programming a robot or computer to be a "moral agent". For other purposes, it will suffice to say, *This is logical inconsistency in action* and leave it at that. Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D. Eisenhower From anders at aleph.se Thu Jan 13 11:18:29 2011 From: anders at aleph.se (Anders Sandberg) Date: Thu, 13 Jan 2011 12:18:29 +0100 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: <4D2CBA29.6030008@gmail.com> References: <4D2CBA29.6030008@gmail.com> Message-ID: <4D2EDF85.2030800@aleph.se> Here is my take on it: "Good and evil" tends to make discussions about morality stupid, but it is important to think about what is good - what is desirable, what gives life and the world value, even what value is. 
A good way is to play the "why game": Why do we do medicine? To become healthy. But why are we striving for this? Health in itself is not valuable. But being ill is often directly painful and indirectly prevents us from doing many things. Health means not just an adequate bodily state, but the ability to pursue one's life projects (whatever they are). So a reason to strive for health is that it allows us to achieve well-being, both the direct state of feeling well but also potentially the well-being that comes from living a good life (whatever we happen to think that good is). Incidentally, remember the WHO definition of health as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." - that is pretty transhuman and seems to promote enhancement as valid. Now, if the real good we are aiming at is well-being, then it becomes pointless to distinguish between therapy and enhancement. We might still have prioritarian concerns that the worst off deserve the most help or practical concerns that illness is easier to treat than mere normality. But both therapy and enhancement aim at the good. AlgaeNymph wrote: > > That's the problem we have. Even when we're not seen as evil, we're > seen as selfish nerds who are utterly indifferent to it. The sad > thing is I find myself almost believing this. Causes that comedians > can't brand as outright evil or obvious spin are pretty much about > fighting evil and/or saving innocents. Citizen heroics, basically. The problem is that there are relatively few clear-cut evils one can fight in an unambiguous way. And real attempts at making the world better often don't look very impressive (consider how most charity works - it is more about being seen as nice than actually achieving good outcomes. Utilitarian meta-charities like Giving What We Can look *weird* to most people - why focus on giving to Deworm The World (yuck!) when you can give to the local church charity?) 
I don't think there is anything wrong with others not viewing us as heroic. Few of us are. And I do think we should stand up for our right to be rationally selfish: I love life, and I will do my best to enjoy it for as long as possible in the best way I can. That includes helping my friends and strangers, and I certainly hope they also get lives they like. Many of the technologies we are discussing are not primarily developed for enhancement purposes (rather, enhancement is a side effect) and often enhancing technologies show "therapeutic" side effects. It is a mistake to think that if someone gets better everyone else gets a bit worse off: the world is positive sum rather than negative. If we deworm Sub-Saharan Africa we are going to reap the benefit of many more brains that function well, helping both themselves, the region and the world. A good, safe and widespread cognitive enhancer would save lives (fewer accidents), speed up technological and economic development and no doubt enable many forms of human flourishing - even if you don't take it you might benefit from it. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From stathisp at gmail.com Thu Jan 13 13:19:19 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 14 Jan 2011 00:19:19 +1100 Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona In-Reply-To: <002401cbaf86$7e92a7d0$7bb7f770$@att.net> References: <002401cbaf86$7e92a7d0$7bb7f770$@att.net> Message-ID: 2011/1/9 spike : > This guy was seriously up-messed: > > > > http://www.youtube.com/profile?user=Classitup10&annotation_id=annotation_564778&feature=iv#p/u/4/E8Wr6AeZTCE > > > > {8-[ > > > > http://www.cnn.com/2011/CRIME/01/08/arizona.shooting/index.html?hpt=T1&iref=BN1 > > > > Best wishes for a full recovery to Rep. Giffords and the other injured > bystanders. He's psychotic, probably schizophrenic. He should have been treated. 
-- Stathis Papaioannou From pharos at gmail.com Thu Jan 13 14:56:45 2011 From: pharos at gmail.com (BillK) Date: Thu, 13 Jan 2011 14:56:45 +0000 Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona In-Reply-To: References: <002401cbaf86$7e92a7d0$7bb7f770$@att.net> Message-ID: On Thu, Jan 13, 2011 at 1:19 PM, Stathis Papaioannou wrote: > He's psychotic, probably schizophrenic. He should have been treated. > > This is America remember. How many dysfunctional unemployable psychotics do you think pay thousands of dollars in medical insurance premiums? See: BillK From jonkc at bellsouth.net Thu Jan 13 14:30:09 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 13 Jan 2011 09:30:09 -0500 Subject: [ExI] Da Capo Test In-Reply-To: <002d01cbb204$c64a9120$52dfb360$@att.net> References: <002d01cbb204$c64a9120$52dfb360$@att.net> Message-ID: On Jan 11, 2011, at 9:59 PM, spike wrote: > Do not undisregard. I couldn't fail to not undisregard you less. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at alice.it Thu Jan 13 15:50:27 2011 From: scerir at alice.it (scerir) Date: Thu, 13 Jan 2011 16:50:27 +0100 Subject: [ExI] Michael Nielsen on Singularity In-Reply-To: References: Message-ID: <10F076988BBF4FAD82910A1175509450@PCserafino> What should a reasonable person believe about the Singularity? http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ From hkeithhenson at gmail.com Thu Jan 13 16:41:19 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 13 Jan 2011 09:41:19 -0700 Subject: [ExI] Mass transit Message-ID: On Thu, Jan 13, 2011 at 5:00 AM, Anders Sandberg wrote: snip > However, the tiny value of pC makes this a very minor worry. I would be > more worried about catching some illness from the crowd or getting hurt > in traffic getting to it. As usual, airtight reasoning from Anders. Thanks. 
Keith From rpwl at lightlink.com Thu Jan 13 16:59:53 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 13 Jan 2011 11:59:53 -0500 Subject: [ExI] Michael Nielsen on Singularity In-Reply-To: <10F076988BBF4FAD82910A1175509450@PCserafino> References: <10F076988BBF4FAD82910A1175509450@PCserafino> Message-ID: <4D2F2F89.7080809@lightlink.com> scerir wrote: > What should a reasonable person believe about the Singularity? > http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ I posted the following reply to Michael Nielsen's essay: While I am sympathetic to the conclusion you state at the end, I am dismayed by the argument that got you there! The probability range is interesting, but not as interesting as the *uncertainty* in the values that went into it (Bayesian probabilities help you take account of prior probabilities, but they do not insulate you from the folly of putting down numbers that are derived from uncertain knowledge). If you were to factor in those uncertainties (assuming that you could, because that would be a huge task, fraught with difficulties having to do with the fundamental nature of probability), you might find that the real range was 0.0001% < range < 99% - in other words, "maybe it will happen, maybe it won't". I am afraid the real story has to do with understanding the nature of AGI research, not fiddling with probabilities. Messy. Empirical. Definitely something that would get a mathematician's hands dirty (as my one-time supervisor, John Taylor, put it when I was first thinking about getting into this field). But in the end, my own take (being up to my eyeballs in the aforesaid dirty work) is that the probability is "high" that it will happen "in the next 20 years".
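The uncertainty-propagation point can be made concrete with a small Monte Carlo sketch. The number of steps and the width of the priors below are invented purely for illustration; nothing here comes from Nielsen's essay.

```python
import random

# Toy illustration: multiplying several "reasonable" step probabilities
# hides the uncertainty in each factor. Instead of fixed numbers, draw
# each step's probability from a wide prior and look at the spread of
# the product. Priors and step count are invented for this sketch.

random.seed(42)

def sample_products(n_steps=4, trials=50_000):
    """Return sorted samples of the product of n_steps uncertain probabilities."""
    products = []
    for _ in range(trials):
        p = 1.0
        for _ in range(n_steps):
            p *= random.uniform(0.01, 0.99)  # very uncertain estimate of one step
        products.append(p)
    return sorted(products)

if __name__ == "__main__":
    prods = sample_products()
    lo = prods[len(prods) // 20]     # 5th percentile
    hi = prods[-(len(prods) // 20)]  # 95th percentile
    print(f"5th-95th percentile of the product: {lo:.1e} .. {hi:.1e}")
```

The resulting interval spans several orders of magnitude: the product of uncertain probabilities is itself deeply uncertain, which is the point being made in words here.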
Richard Loosemore From rpwl at lightlink.com Thu Jan 13 17:32:39 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 13 Jan 2011 12:32:39 -0500 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: References: Message-ID: <4D2F3737.7070005@lightlink.com> Keith Henson wrote: > On Thu, Jan 13, 2011 at 5:00 AM, Anders Sandberg wrote: > > snip > >> However, the tiny value of pC makes this a very minor worry. I would be >> more worried about catching some illness from the crowd or getting hurt >> in traffic getting to it. > > As usual, airtight reasoning from Anders. Uh, not so fast. The computations were *too* airtight, since they involved only the (tractable) computations about the probability of being directly hurt. Terrorism (or feral action, if you will) is often not designed to target the individuals it hurts directly, but to target the perceptions of the majority of society. That is a very different thing. A suitably planned series of terrorist attacks that (speaking probabilistically) would be very unlikely to affect anyone on this list, could nevertheless trigger a sequence of events in which, say, the United States reacted spasmodically and allowed the installation of an extreme right-wing government led by President Glenn Beck and Vice President Pat Robertson, which then decided to pre-emptively nuke Iran, which responded by unleashing one nuke at Israel and another (who would have guessed they had one experimental ICBM that, in its first test, would manage to limp its way around the globe?) that made it to the US and destroyed a large city. (And if anyone feels inclined to scoff at that scenario, don't forget that the attacks that occurred a little less than 10 years ago had the effect of changing U.S. perceptions enough to start two overseas wars.) So, please redo the calculations and include the probability of "side effects" such as these, which utterly dwarf the direct effects.
(Hint: correct answer is that the probabilities cannot be computed in any meaningful way). Richard Loosemore From atymes at gmail.com Thu Jan 13 17:40:01 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 13 Jan 2011 09:40:01 -0800 Subject: [ExI] Michael Nielsen on Singularity In-Reply-To: <4D2F2F89.7080809@lightlink.com> References: <10F076988BBF4FAD82910A1175509450@PCserafino> <4D2F2F89.7080809@lightlink.com> Message-ID: On Thu, Jan 13, 2011 at 8:59 AM, Richard Loosemore wrote: > scerir wrote: >> >> What should a reasonable person believe about the Singularity? >> >> http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ > > I posted the following reply to Michael Nielsen's essay: I posted something similar. MN seemed to discount the first anon's rather correct summary of his thoughts. Yes, the Singularity would require a series of steps to occur. No, you can't just blindly slap a "reasonable" range of probabilities on each step, multiply them together (assuming the same null hypothesis in each case: if any one of those steps fails, the world remains as it is today), and be done with it. From hkeithhenson at gmail.com Thu Jan 13 17:41:59 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 13 Jan 2011 10:41:59 -0700 Subject: [ExI] "Feral" humans, NOT Message-ID: On Thu, Jan 13, 2011 at 5:00 AM, BillK wrote: > > On Thu, Jan 13, 2011 at 6:22 AM, Keith Henson wrote: > >> If schizophrenia is the result of a reactivated infection of HERV-W, >> then we no longer have to think of the person who has schizophrenia as >> being at fault any more than a person who has TB or cancer. >> >> And perhaps we can be more observant, especially if people can be >> tested for antibodies to the virus. You see a friend start acting >> weird and withdrawing, maybe they should be tested and treated because >> there might be a cause for the odd behavior that they have no >> control over.
>> >> It's possible that a high level of HERV-W should be reason to keep >> people from buying guns. > > Yes, if someone acts weird (like not supporting the government) then > they should be treated for dangerous behaviour. Millions of children > in the US already get fed psychiatric drugs to treat behavioural > problems. It has potential for abuse, I fully admit. For ADD kids, attention concentration drugs are a ghodsend in some cases. But what I am concerned about here is late adolescence or early adult onset schizophrenia. Caught early and treated, most people who have it can lead productive lives. Untreated, it seems to have a bad feedback loop and the prognosis is not good. One of the most serious problems is that people withdraw from people who start acting weird--like happened in Jared's case. This has negative effects similar to being put in solitary confinement. Shooting up a political gathering is 6 or 7 sigma out even for florid schizophrenia. But I know of at least two incidents where schizophrenia was untreated because of a certain cult's belief against any psychiatric treatment. They did not end well. http://en.wikipedia.org/wiki/Elli_Perkins http://en.wikipedia.org/wiki/Scientology_in_Australia#Revesby_murder In both cases relatives prevented the administration of antipsychotic drugs. *IF* HERV-W is causal and is something that can be tested, then widespread awareness of the behavior and thinking changes that go with the onset of schizophrenia might result in people being referred into medical treatment early. This would take fairly serious education of people to watch for these kinds of changes in their friends. Or it might be that everyone should be tested for active HERV-W and maybe treated if they show signs that this retrovirus is active. Testing and treating might reduce or eliminate MS as well as bipolar disorder. Sixty million years is a *long* time for an infection.
Keith From kanzure at gmail.com Thu Jan 13 17:32:03 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Thu, 13 Jan 2011 11:32:03 -0600 Subject: [ExI] Michael Nielsen on Singularity In-Reply-To: <10F076988BBF4FAD82910A1175509450@PCserafino> References: <10F076988BBF4FAD82910A1175509450@PCserafino> Message-ID: On Thu, Jan 13, 2011 at 9:50 AM, scerir wrote: > What should a reasonable person believe about the Singularity? > > http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ > I was working on a project back in 2008 that is relevant here: http://theuncertainfuture.com/ Sorry about java. :-( -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Thu Jan 13 20:16:10 2011 From: anders at aleph.se (Anders Sandberg) Date: Thu, 13 Jan 2011 21:16:10 +0100 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: <4D2F3737.7070005@lightlink.com> References: <4D2F3737.7070005@lightlink.com> Message-ID: <4D2F5D8A.3040306@aleph.se> Richard Loosemore wrote: > Keith Henson wrote: >> As usual, airtight reasoning from Anders. > > Uh, not so fast. Thanks Keith, and you're right, Richard. :-) In fact, I have partially redone the calculations more carefully and found a few minor issues. I will post them on my blog a bit later (right now I am sitting on the London-Oxford bus, hardly the best place for getting probability theory stringent). Basically, it turns out that the risk of being harmed from a single-victim feral is a bit larger in small groups (avoid elevators!) because of the smaller pool of potential victims. And for power-law distributed terrorism there is a situation where there exists a finite most dangerous group size for a given probability of people being terrorists and for the damage exponent. But they hardly change my core conclusions.
> > Terrorism (or feral action, if you will) is often not designed to > target the individuals it hurts directly, but to target the > perceptions of the majority of society. Yup. In many ways this is a good thing, since terrorists do not seem to maximize lethality. > > So, please redo the calculations and include the probability of "side > effects" such as these, which utterly dwarf the direct effects. > > (Hint: correct answer is that the probabilities cannot be computed in > any meaningful way). Depends on whether you are a subjectivist or not about probabilities. I see no problem with saying that the risk of being affected is P(me affected|side effects) P(side effects|terrorism) P(terrorist act). The probabilities are going to be subjective estimates, largely set by experience and messy, unreliable intuition. A more elaborate model taking real-world structure into account might even give better estimates, but it will still merely be a best guess. This is entirely OK and rational as long as I correctly update probabilities as I get new evidence; I might wish for the certainty of mathematics or firm empirical data, but in a world of unknowns and black swans this is what we have to make do with. Actually, let's play around a bit with our assumptions and see what happens. I think we have a pretty good model of terrorism being power-law distributed with exponent -2.5. The amount of effect a terrorist action has depends on 1) where it happens, 2) how big it is, 3) how outrageous it is.
I would model this by saying the effect probably scales with the size X as X^k, where k>1. The proper thing would be to actually check the amount of coverage different terrorist actions have got as a function of their sizes, building a proper probability model. Finally, let's make a guesstimate of how the event effect influences the chance of it influencing me. I can see an argument for a threshold effect (small ones rarely matter, big ones have a high likeliehood): a simple model would be P(affected|effect)=(effect}^p where p is another exponent > 1, and we clamp the result to [0,1]. Now, putting all this together we get P(me affected|event size = X)=CX^(k+p-2.5) where C is the normalization factor. This crude estimate already tells us something interesting. Unless k+p<2.5 (which is unlikely, since both are by assumption > 1) there is going to a be a critical terrorism size that affects everybody. This is in many ways the terrorist sweet spot: it is hard to make big X attacks, but if you reach a sufficient size you will get global effects - it actually doesn't pay making bigger attacks. If k+p<2.5 big attacks do not pay: too hard to do, and there is insufficient reaction to them. So other forms of "politics by other means" are needed. So if we want to reduce terrorism it might be interesting to consider *ignoring* it to a certain extent - overreactions play into the hands of terrorists (and anti-terrorists, of course). (OK, this is as far as I could get between London and Oxford...) 
Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From anders at aleph.se Thu Jan 13 20:29:54 2011 From: anders at aleph.se (Anders Sandberg) Date: Thu, 13 Jan 2011 21:29:54 +0100 Subject: [ExI] Postdoctoral Research Fellowship in Ethics and Geo-engineering Governance In-Reply-To: <011BA738B7D5054F9D8A1FC09ABB36BF4BA4FA1802@EXMBX02.ad.oak.ox.ac.uk> References: <011BA738B7D5054F9D8A1FC09ABB36BF4BA4FA1802@EXMBX02.ad.oak.ox.ac.uk> Message-ID: <4D2F60C2.80706@aleph.se> Might be of some interest to some people on this list: (Oxford, where ethics is FUN!) > *University of Oxford Faculty of Philosophy and **Saïd Business > School** **Postdoctoral Research Fellowship in Ethics** **and > Geo**-**engineering Governance* > > Protocol reference number: HUM/10033F/E. > > Grade 8: £36,715 - £43,840 per annum at 1 October 2010. > Fixed-term for two years from date of appointment. > > Applications are invited for a full-time Postdoctoral Research > Fellowship in Ethics > and Geo-engineering Governance to work on a project in the newly > formed Oxford > Geo-engineering Programme (OGP). The post will be jointly hosted by > the Institute for > Science and Ethics (ISE) (part of the Faculty of Philosophy) and the > Institute for > Science, Innovation and Society (InSIS) (part of the Saïd Business > School), both of > which are part of the Oxford Martin School. > > The Research Fellow will conduct research on the ethical, legal and > governance > implications of advances in geo-engineering and will be expected to > publish original, > high-quality research. In addition to research responsibilities the > postholder will be > expected to contribute to the project in other ways, which may > include, for example, > involvement in conference or other event organisation and engaging in > collaborations > with external researchers.
> > The fellowship is for two years from the date of appointment and the > postholder will > be a Research Fellow of both ISE and InSIS. ISE is based at Littlegate > House, central > Oxford and InSIS is based at the Saïd Business School nearby. > > Candidates should have a strong academic background in one or more of > the following: (1) Philosophy; (2) Politics and International > Relations; (3) Environmental Sociology; (4) Political Anthropology; > (5) Law; (6) Science and Technology Studies. By the date of > appointment, candidates should have received (or submitted their > thesis for) the degree of PhD (or equivalent). > > Further particulars including details of the application procedure are > available from: > > _www.philosophy.ox.ac.uk/vacancies_ > > _http://www.ise.ox.ac.uk/get_involved/vacancy_ > _www.practicalethics.ox.ac.uk/vacancies.htm_ > > _www.insis.ox.ac.uk_ > > or directly from Deborah Sheehan, ISE, Suite 8, Littlegate House, > 16-17 St Ebbes St., Oxford OX1 1PT. Telephone: +44 (0)1865 286888, > e-mail _deborah.sheehan at philosophy.ox.ac.uk_ > . > > *Deadline for receipt of applications: noon (GMT) on Monday 14 > February 2011**.* > > > -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From atymes at gmail.com Thu Jan 13 20:30:33 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 13 Jan 2011 12:30:33 -0800 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: <4D2F5D8A.3040306@aleph.se> References: <4D2F3737.7070005@lightlink.com> <4D2F5D8A.3040306@aleph.se> Message-ID: On Thu, Jan 13, 2011 at 12:16 PM, Anders Sandberg wrote: > So if we want to reduce terrorism it might be interesting > to consider *ignoring* it to a certain extent - overreactions play into the > hands of terrorists (and anti-terrorists, of course). This is known, of course.
The problem is, how do we prevent most people from getting into hysterics over terrorists, or from taking action with long-term consequences before the hysteria subsides? A parallel can be made to immune systems. When injury occurs, such as from a virus or a terrorist attack, the system kicks up to mitigate the injury and prevent further damage. However, in some cases, the system's reaction can do more damage - even at fatal levels. (For example, autoimmune disorders - which, since they're fundamentally a failure to recognize parts of self as self, provoking attacks against parts useful or required for continued existence, arguably apply to both types of systems here. What happens when, say, the average citizen is so aware of police abuses that the concept of anyone trying to impose law and order becomes anathema, so no one supports those trying to prevent thugs and warlords from taking over for personal benefit?) Drugs have been developed to combat this in biological immune systems. Is there an equivalent for societal ones? From rpwl at lightlink.com Thu Jan 13 21:04:49 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 13 Jan 2011 16:04:49 -0500 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: <4D2F5D8A.3040306@aleph.se> References: <4D2F3737.7070005@lightlink.com> <4D2F5D8A.3040306@aleph.se> Message-ID: <4D2F68F1.7020107@lightlink.com> Anders Sandberg wrote: > Richard Loosemore wrote: >> Keith Henson wrote: >>> As usual, airtight reasoning from Anders. >> >> Uh, not so fast. > Thanks Keith, and you're right, Richard. :-) > > In fact, I have partially redone the calculations more carefully and > found a few minor issues. I will post them on my blog a bit later (right > now I am sitting on the London-Oxford bus, hardly the best place for > getting probability theory stringent).
Basically, it turns out that the > risk of being harmed from a single-victim feral is a bit larger in small > groups (avoid elevators!) because of the smaller pool of potential > victims. And for power-law distributed terrorism there is a situation > where there exists a finite most dangerous group size for a given > probability of people being terrorists and for the damage exponent. But > they hardly change my core conclusions. > >> >> Terrorism (or feral action, if you will) is often not designed to >> target the individuals it hurts directly, but to target the >> perceptions of the majority of society. > > Yup. In many ways this is a good thing, since terrorists do not seem to > maximize lethality. > >> >> So, please redo the calculations and include the probability of "side >> effects" such as these, which utterly dwarf the direct effects. >> >> (Hint: correct answer is that the probabilities cannot be computed in >> any meaningful way). > > Depends on whether you are a subjectivist or not about probabilities. I > see no problem with saying that the risk of being affected is P(me > affected|side effects) P(side effects|terrorism) P(terrorist act). The > probabilities are going to be subjective estimates, largely set by > experience and messy, unreliable intuition. A more elaborate model > taking real-world structure into account might even give better > estimates, but it will still merely be a best guess. This is entirely OK and > rational as long as I correctly update probabilities as I get new > evidence; I might wish for the certainty of mathematics or firm > empirical data, but in a world of unknowns and black swans this is what > we have to make do with. > > Actually, let's play around a bit with our assumptions and see what > happens. I think we have a pretty good model of terrorism being power- > law distributed with exponent -2.5. The amount of effect a terrorist > action has depends on 1) where it happens, 2) how big it is, 3) how > outrageous it is.
Who can name this week's terrorist actions without > googling? They all happened in the usual far-away countries we tend to > skim over in our news reading, and they happened to people we do not > know. Conversely, 911 was an unusually big terrorist event - it is an > outlier in the data, and the effect was of course amplified by happening > in a major developed country and in an outrageous fashion (not all > tragedies are equal). I would model this by saying the effect probably > scales with the size X as X^k, where k>1. The proper thing would be to > actually check the amount of coverage different terrorist actions have > got as a function of their sizes, building a proper probability model. > Finally, let's make a guesstimate of how the event effect influences the > chance of it influencing me. I can see an argument for a threshold > effect (small ones rarely matter, big ones have a high likelihood): a > simple model would be P(affected|effect)=(effect)^p where p is another > exponent > 1, and we clamp the result to [0,1]. Now, putting all this > together we get P(me affected|event size = X)=CX^(k+p-2.5) where C is > the normalization factor. > > This crude estimate already tells us something interesting. Unless > k+p<2.5 (which is unlikely, since both are by assumption > 1) there is > going to be a critical terrorism size that affects everybody. This is > in many ways the terrorist sweet spot: it is hard to make big X attacks, > but if you reach a sufficient size you will get global effects - it > actually doesn't pay to make bigger attacks. If k+p<2.5 big attacks do > not pay: too hard to do, and there is insufficient reaction to them. So > other forms of "politics by other means" are needed. So if we want to > reduce terrorism it might be interesting to consider *ignoring* it to a > certain extent - overreactions play into the hands of terrorists (and > anti-terrorists, of course). > > (OK, this is as far as I could get between London and Oxford...)
I think the only problem now is that whereas your first calculation was relatively tight (absent minor errors), you have allowed yourself such a degree of wiggle-room in the assumptions built into your new calculation of the side effects of terrorism that your eventual conclusion was that the range of probability-of-being-affected could easily have an upper bound of 1 (i.e. everybody would be affected), which was kind of what I said, but in words... :-) The important factors, as always, lie elsewhere (cf. my post on the IEET.org website last week). What matters most is actually the damping factor, not the terrorism itself. The damping factor for U.S. terrorism targets is currently Fox "News" and its legislative branch .... and that, of course, is a negative damping mechanism (i.e. amplification). Conclusion is: forget the terrorists and target the terrorism amplification mechanisms. Richard Loosemore From jonkc at bellsouth.net Thu Jan 13 21:02:41 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 13 Jan 2011 16:02:41 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> <007401cbb221$ddd06c80$99714580$@att.net> Message-ID: <9C57987E-87F9-405C-B7B3-A685F614823C@bellsouth.net> On Jan 12, 2011, at 2:34 AM, Adrian Tymes wrote: > Faith in god, like many things, can be used for good or ill. Just because we > see how it is so often (arguably the majority of the time) used for > ill, does not mean we must disavow that it can ever have purely beneficial results. Faith in god is very widespread so it would be unrealistic to expect that it had never done some good somewhere sometime, but I can say with some confidence that it is NEVER purely beneficial and is as close to being purely detrimental as anything yet known.
> you could alter the faith in god to, say, faith in humanity Then you would passionately believe in something about humanity that the evidence does not support and may even contradict; that doesn't sound like a very good thing to me. Believing in something for no good reason is not a virtue, it is a vice. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Thu Jan 13 21:32:35 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 13 Jan 2011 13:32:35 -0800 (PST) Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: Message-ID: <345060.81856.qm@web114411.mail.gq1.yahoo.com> John Grigg wrote: > Ben, bellow forth with your best rendition of a mad > scientist/supervillain laugh, and tell your classmate that > you and all > the other evil transhumanists are only getting started with > the master > plan to exploit the weak and conquer the world. > > "You have seen nothing yet!!!" > John, you are getting Darren & me mixed up. He's the one with classmates. I'm the one with the mad scientist laugh. Ben Zaiboc (muahahaha!) From stefano.vaj at gmail.com Thu Jan 13 23:06:21 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 14 Jan 2011 00:06:21 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <9C57987E-87F9-405C-B7B3-A685F614823C@bellsouth.net> References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> <007401cbb221$ddd06c80$99714580$@att.net> <9C57987E-87F9-405C-B7B3-A685F614823C@bellsouth.net> Message-ID: 2011/1/13 John Clark > > Faith in god is very widespread so it would be unrealistic to expect that > it had never done some good somewhere sometime, but I can say with some > confidence that it is NEVER purely beneficial and is as close to being > purely detrimental as anything yet known. > Come on.
Actual faith in an entity with Allah/God/Jahve features is indeed a very peculiar and limited phenomenon, both in time and space, except through our own projections onto entirely different cultural contexts. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Fri Jan 14 02:50:01 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 14 Jan 2011 13:50:01 +1100 Subject: [ExI] "Feral" humans, NOT In-Reply-To: References: Message-ID: On Thu, Jan 13, 2011 at 8:58 PM, BillK wrote: > Yes, if someone acts weird (like not supporting the government) then > they should be treated for dangerous behaviour. Millions of children > in the US already get fed psychiatric drugs to treat behavioural > problems. There are multiple features in the clinical examination which identify a belief or behaviour as psychotically driven, but whether it has potentially dangerous consequences is not one of them. Moreover, if a belief or behaviour is not psychotically driven then antipsychotic medication will make no difference. For example, if a person is religious because they were brought up that way by their parents, medication will not have any effect on their beliefs. But if they became religious as a result of schizophrenia there will be other associated symptoms and medication will in most cases make the beliefs go away or diminish in intensity. The beliefs in themselves may be just as bizarre in each case, but the cause is different. -- Stathis Papaioannou From jonkc at bellsouth.net Fri Jan 14 03:10:40 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 13 Jan 2011 22:10:40 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams.
In-Reply-To: References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> <007401cbb221$ddd06c80$99714580$@att.net> <9C57987E-87F9-405C-B7B3-A685F614823C@bellsouth.net> Message-ID: <46AAC9F8-DCC3-4B80-8E66-EF4BEB7BF0A7@bellsouth.net> On Jan 13, 2011, at 6:06 PM, Stefano Vaj wrote: > Actual faith in an entity with Allah/God/Jahve features is indeed a very peculiar and limited phenomenon I wish that were true but humanity has been creating countless gods for many thousands of years and has been doing so in every culture except for the scientific one, but that culture is quite small and has only been around for a few hundred years. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Jan 14 03:43:58 2011 From: spike66 at att.net (spike) Date: Thu, 13 Jan 2011 19:43:58 -0800 Subject: [ExI] "Feral" humans, NOT In-Reply-To: References: Message-ID: <003501cbb39d$46cae310$d460a930$@att.net> Subject: Re: [ExI] "Feral" humans, NOT On Thu, Jan 13, 2011 at 8:58 PM, BillK wrote: > Yes, if someone acts weird (like not supporting the government) then > they should be treated for dangerous behaviour. Millions of children > in the US already get fed psychiatric drugs to treat behavioural > problems... Ja. It concerns me that ADD in particular may be over-diagnosed, when really we are seeing decreasing attention spans as a result of video games. Compare any Hollywood movie made before about 1950 with any one made after about 1990 in the following manner. Start the movie, mute the sound, turn off all the lights and turn your chair away from the screen, so all you see is the reflected glow from the TV. In the pre-1950 movie, one sees a steady glow reflection for about half a minute or more at a time. The modern movie is constant flash flash flash, change every 5 seconds or so. Now compare any modern court drama with the 1950s Perry Mason equivalent. 
Discussions go on in the courtroom for ten minutes at a time, and one must concentrate for that entire time to follow the remarkably complex story lines. We have mostly lost the ability or the desire to do that now. We can take in information faster, but we expect to have more control over our information input. Our attention spans have shortened. I welcome Stathis' comments on this. This takes me to something I am worrying a lot about recently: how education is reacting to shortened attention spans, not just in children but in all of us. They still expect kids to sit still and listen to a teacher deliver a lecture. Many are called ADD, when they might just be jumpy and bored. We have failed to adjust education as our computers have adjusted us. spike From msd001 at gmail.com Fri Jan 14 04:23:23 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 13 Jan 2011 23:23:23 -0500 Subject: [ExI] "Feral" humans, NOT In-Reply-To: <003501cbb39d$46cae310$d460a930$@att.net> References: <003501cbb39d$46cae310$d460a930$@att.net> Message-ID: On Thu, Jan 13, 2011 at 10:43 PM, spike wrote: > Ja. It concerns me that ADD in particular may be over-diagnosed, when really we are seeing decreasing attention spans as a result of video games. Compare any Hollywood movie made before about 1950 with any one made after about 1990 in the following manner. Start the movie, mute the sound, turn off all the lights and turn your chair away from the screen, so all you see is the reflected glow from the TV. In the pre-1950 movie, one sees a steady glow reflection for about half a minute or more at a time. The modern movie is constant flash flash flash, change every 5 seconds or so.
I grew up with "video games" and resent this attack. :) Anyone who has spent 60+ hours to beat a modern immersion-style puzzle would not claim their attention is reduced by such an obsession. Those old Atari games held my attention for hours despite being fairly basic and repetitive. At 10 years old, Tunnels of Doom on a TI99/4a was a week's worth of work for an 8 floor dungeon. How is does that shorten attention span? Blame TV all you want, but don't throw video games under the proverbial bus simply because they're an easy scapegoat. It's surely those 1950's generation TV shows that started the decline... Or maybe it was technicolor that started it - life was far better in black & white. No, perhaps it was the horseless carriage driving all over at reckless top speeds of 30mph: humans could no longer afford the luxury of a stroll down the country lane without some young punk and his car ruining the pastoral beauty. Maybe it goes back to the locomotive; all that rapid transit made us care if cattle would make it to market before the end of next week. That started our obsession with getting things done in reasonable time. I don't think I can go much farther back... it's just a blur of thousands of years walking around bashing animals with rocks and other crude tools... all the way back to that first monkey who touched the monolith. I think my point is that there are numerous cooperating influences that shorten our attention span. We had the eyes of a predator facing forward with focus on our prey, we're adapting to eyes of the prey on both sides of our heads for constant vigilance. Yes; everything is moving faster and we're barely able to twitch quickly enough to keep up... > Now compare any modern court drama with the 1950s Perry Mason equivalent. ?Discussions go on in the courtroom for ten minutes at a time, and one must concentrate for that entire time to follow the remarkably complex story lines. ?We have mostly lost the ability or the desire to do that now. 
We can take in information faster, but we expect to have more control over our information input. Our attention spans have shortened. I welcome Stathis' comments on this. > > This takes me to something I am worrying a lot about recently: how education is reacting to shortened attention spans, not just in children but in all of us. They still expect kids to sit still and listen to a teacher deliver a lecture. Many are called ADD, when they might just be jumpy and bored. > > We have failed to adjust education as our computers have adjusted us. On this point I completely agree. Education is dangerously overdue for innovation. From msd001 at gmail.com Fri Jan 14 04:29:30 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 13 Jan 2011 23:29:30 -0500 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: <4D2F68F1.7020107@lightlink.com> References: <4D2F3737.7070005@lightlink.com> <4D2F5D8A.3040306@aleph.se> <4D2F68F1.7020107@lightlink.com> Message-ID: On Thu, Jan 13, 2011 at 4:04 PM, Richard Loosemore wrote: > Conclusion is: forget the terrorists and target the terrorism amplification > mechanisms. I had to check to make sure that wasn't a Keith Henson quote; it sounds very much like the conclusion of many EP analyses :) From spike66 at att.net Fri Jan 14 04:45:56 2011 From: spike66 at att.net (spike) Date: Thu, 13 Jan 2011 20:45:56 -0800 Subject: [ExI] i'll take computers for 100 please alex... Message-ID: <005501cbb3a5$ef370fd0$cda52f70$@att.net> Oh my, is this cool or what: http://www.foxnews.com/scitech/2011/01/13/ibm-watson-takes-jeopardy-champs/ spike From jonkc at bellsouth.net Fri Jan 14 06:25:53 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 14 Jan 2011 01:25:53 -0500 Subject: [ExI] i'll take computers for 100 please alex...
In-Reply-To: <005501cbb3a5$ef370fd0$cda52f70$@att.net> References: <005501cbb3a5$ef370fd0$cda52f70$@att.net> Message-ID: On Jan 13, 2011, at 11:45 PM, spike wrote: > > Oh my, is this cool or what: > > http://www.foxnews.com/scitech/2011/01/13/ibm-watson-takes-jeopardy-champs/ Very cool indeed! You can watch about 4 minutes of it at: http://www.youtube.com/watch?v=M3On3Td9x8g John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Jan 14 06:43:02 2011 From: spike66 at att.net (spike) Date: Thu, 13 Jan 2011 22:43:02 -0800 Subject: [ExI] "Feral" humans, NOT In-Reply-To: References: Message-ID: <000401cbb3b6$4abde5d0$e039b170$@att.net> This video actually supports Keith's notion (I think it was Keith, whoever posted it) that the internet creates the illusion that this sort of thing is rising when it is actually more endemic. The local authorities didn't even hear of this cat fight until it was posted on YouTube: http://www.cnn.com/video/#/video/bestoftv/2011/01/13/exp.nr.gas.station.brawl.cnn?hpt=T2 Side note, this occurred not far from where I grew up. I can testify, stuff like this did happen regularly, and if so, people would generally get scarce immediately afterwards, then the cops would cruise by about 15 minutes later, no fight going on, so they would head off to a safer area of town. {8^D spike From anders at aleph.se Fri Jan 14 09:20:02 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 14 Jan 2011 10:20:02 +0100 Subject: [ExI] i'll take computers for 100 please alex... In-Reply-To: References: <005501cbb3a5$ef370fd0$cda52f70$@att.net> Message-ID: <4D301542.2000103@aleph.se> At FHI we have a probability board where we estimate the probability of a human or machine win. Quite a large spread of estimates, but I think the median is around a 75% chance of a machine win (I am guessing a 50% chance myself).
-- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From js_exi at gnolls.org Fri Jan 14 09:51:56 2011 From: js_exi at gnolls.org (J. Stanton) Date: Fri, 14 Jan 2011 01:51:56 -0800 Subject: [ExI] A better option than fish oil? Message-ID: <4D301CBC.2070605@gnolls.org> Sondre Bjellås wrote: > http://www.boston.com/news/science/articles/2010/06/25/jaw_dropping_levels_of_heavy_metals_found_in_whales/ That's true, but fish oil capsules don't come from whales. They come from (usually) menhaden, and (sometimes) anchovies or sardines. If you're interested, you can look up the relative methylmercury levels of various commercial fish and shellfish here: http://www.fda.gov/Food/FoodSafety/Product-SpecificInformation/Seafood/FoodbornePathogensContaminants/Methylmercury/ucm115644.htm Not surprisingly, the higher up the food chain a fish is and the longer-lived it is, the more methylmercury it contains (as a general rule). Toxins bioaccumulate. The reason sperm whales are so full of poison is that they're huge, long-lived filter feeders: they strain the ocean for decades at a time (their lifespan is basically the same as a human's). Little plankton feeders like menhaden, anchovies, and sardines don't live more than a couple-few years. AWESOME BONUS for anyone who reads this far: http://www.youtube.com/watch?v=tl4T26O0eq0 J. Stanton http://www.gnolls.org > On Fri, Jan 7, 2011 at 8:01 PM, J. Stanton wrote: >> On 1/7/11 4:00 AM, Sondre Bjellås wrote: >> >>> The ocean ain't exactly a >>> pure ("clean") substance, a better alternative is plant-based oils. >>> Though a sales guy would say anything to sell his products, no matter >>> what real effect or fact or truth their products have.
- Sondre On From eugen at leitl.org Fri Jan 14 10:06:06 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 14 Jan 2011 11:06:06 +0100 Subject: [ExI] "Feral" humans, NOT In-Reply-To: References: <003501cbb39d$46cae310$d460a930$@att.net> Message-ID: <20110114100606.GD16518@leitl.org> On Thu, Jan 13, 2011 at 11:23:23PM -0500, Mike Dougherty wrote: > I understand the point you are making but I have to draw some > attention to the matter-of-fact way you depict the cause and effect > with "...decreasing attention spans as a result of video games." Not just video games, electronic media in general. The result is beyond belief. You have no idea how bad it is. No idea. Yes, there are exceptions. There always are. They hardly matter, though. > That's too simple. It's been shoved on us too many times without any > convincing proof. I grew up with "video games" and resent this > attack. :) Anyone who has spent 60+ hours to beat a modern > immersion-style puzzle would not claim their attention is reduced by > such an obsession. Those old Atari games held my attention for hours > despite being fairly basic and repetitive. At 10 years old, Tunnels > of Doom on a TI99/4a was a week's worth of work for an 8 floor > dungeon. How does that shorten attention span? ... -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stefano.vaj at gmail.com Fri Jan 14 11:02:37 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 14 Jan 2011 12:02:37 +0100 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On 12 January 2011 18:24, Adrian Tymes wrote: > Difference is, one could - theoretically - return (to Europe, if not to > England) from Australia. Merchants did it a lot.
I do not think that the real issue is the ability to go back and forth and to spend vacations at home. In history, this has in fact been pretty rare amongst migrants. The real issue is whether there might be some undetermined-time sustainability for, and some possible travelling back to Earth from, the very group which emigrated in the first place. If such a group is doomed to programmed extinction anyway, be it after the expiration of the lifespan of its members, I understand that many fail to see the point of embarking on such a project. -- Stefano Vaj From stefano.vaj at gmail.com Fri Jan 14 11:04:42 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 14 Jan 2011 12:04:42 +0100 Subject: [ExI] one way trip to mars In-Reply-To: References: <002001cbb0e5$01d49720$057dc560$@att.net> Message-ID: On 13 January 2011 01:28, Mike Dougherty wrote: > So I'll put your name on the list (next to mine) that going to mars > "in-person" is a silly idea. Send the machines. How would that be sillier than going in person to the top of Everest? -- Stefano Vaj From pharos at gmail.com Fri Jan 14 11:37:00 2011 From: pharos at gmail.com (BillK) Date: Fri, 14 Jan 2011 11:37:00 +0000 Subject: [ExI] "Feral" humans, NOT In-Reply-To: <20110114100606.GD16518@leitl.org> References: <003501cbb39d$46cae310$d460a930$@att.net> <20110114100606.GD16518@leitl.org> Message-ID: On Fri, Jan 14, 2011 at 10:06 AM, Eugen Leitl wrote: > Not just video games, electronic media in general. The result is > beyond belief. You have no idea how bad it is. No idea. > > Yes, there are exceptions. There always are. They hardly matter, though. > This is sociology, so of course it is full of confusion, contradictions and exceptions. If you doubt that the decreased attention span effect exists, ask the people whose livelihood depends on it. Advertisers are making greater and greater efforts to get attention to their wares. Teachers are noticing their classes doing everything except listening to the lecture.
The problem is that the human brain is single-stream. No multi-core processors here. So there are basically two methods humans use to cope with the electronic flood of data. 1) Attempt multi-tasking. 2) Filtering the flood. 1) Leads to the 8 second attention span. Continual switching between twitter, facebook, IMs, etc. If this becomes a permanent state, then it is obviously counter-productive. Deep thought doesn't happen here. This seems to apply mostly to the younger generation who have grown up in this environment. They live in a world of continual interruptions, always-on, updating each other on what the latest gossip is. Perhaps they don't realise that life doesn't have to be like this and anyway, they seem to like it that way. 2) Filtering is the solution that the older generation generally choose. Adblock gets rid of the ads shouting at you. Pruning the RSS feeds that you are prepared to spend time reading. 'Lifehacker' productivity solutions get applied. Use the 80/20 rule. 80% of everything can be thrown out. Keep the 20% useful stuff. And so on. BillK From natasha at natasha.cc Fri Jan 14 15:45:14 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 14 Jan 2011 09:45:14 -0600 Subject: [ExI] i'll take computers for 100 please alex... In-Reply-To: References: <005501cbb3a5$ef370fd0$cda52f70$@att.net> Message-ID: <4EE5F10705C84163BFAFAC1A3B87D2C1@DFC68LF1> Thanks for posting link! Natasha Vita-More _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Sent: Friday, January 14, 2011 12:26 AM To: ExI chat list Subject: Re: [ExI] i'll take computers for 100 please alex... On Jan 13, 2011, at 11:45 PM, spike wrote: Oh my, is this cool or what: http://www.foxnews.com/scitech/2011/01/13/ibm-watson-takes-jeopardy-champs/ Very cool indeed!
You can watch about 4 minutes of it at: http://www.youtube.com/watch?v=M3On3Td9x8g John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Fri Jan 14 15:51:25 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 14 Jan 2011 16:51:25 +0100 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: References: <4D2F3737.7070005@lightlink.com> <4D2F5D8A.3040306@aleph.se> Message-ID: <4D3070FD.7010100@aleph.se> I have put up my calculations now at http://www.aleph.se/andart/archives/2011/01/_how_dangerous_is_it_to_be_in_a_crowd.html A fun finding is that indeed, elevators are more dangerous than large crowds. Terrorism makes super-large crowds more dangerous, but the effect is minor (since terrorism is usually a minor cause of death). Incidentally, I noticed that school shootings seem to have a power law distribution with exponent about -2. Adrian Tymes wrote: > This is known, of course. The problem is, how do we prevent most people > from getting into hysterics over terrorists, or from taking action > with long term > consequences before the hysteria subsides? > How do we keep people and political systems from overreacting to strong signals, deliberately intended to cause overreaction? One clear approach is that a diverse system is harder to game than a simple or homogeneous one: it is harder to tune the signal. Similarly adding a bit of slowness to the system makes it less likely to overreact - but a problem is that many societies react to this *useful* aspect by trying to reduce the lag, making themselves more sensitive. The corrective processes on the other hand retain their normal slow timescale, making the system on average more biased. > Drugs have been developed to combat this in biological immune systems. Is > there an equivalent for societal ones? > The drugs are applied by an external body that is aware of the problem of the immune system.
For a society this would be some extra body that has the right to inhibit or block decisions in quite core functions, yet is insulated from them and whose duty is to maintain the integrity of the system. Maybe a constitutional court? -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From hkeithhenson at gmail.com Fri Jan 14 16:41:47 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 14 Jan 2011 09:41:47 -0700 Subject: [ExI] Probability of being affected by terrorism Message-ID: On Thu, Jan 13, 2011 at 11:53 PM, Anders Sandberg wrote: snip > Actually, let's play around a bit with our assumptions and see what > happens. I think we have a pretty good model of terrorism being power > law distributed with exponent -2.5. http://physicsworld.com/cws/article/news/21465 "The New Mexico pair found that the probability of an event with a severity of x or higher was proportional to x^-α, where the scaling parameter α has a value close to two (see figure). Moreover, they showed that the distributions did not fit other "heavy-tailed" distributions like a log-normal curve. According to Clauset and Young, the results show that extreme events like September 11 are not "outliers" but part of the overall pattern of terrorist attacks that is "scale invariant". "Unfortunately, the implications of the scale invariance are almost all negative," Clauset and Young told PhysicsWeb. "For example, because the scaling parameter is less than two, the size of the largest terrorist attack to date will only grow with time. If we assume that the scaling relationship and the frequency of events do not change in the future, we can expect to see another attack at least as severe as September 11 within the next seven years." Clauset and Young also suggest that the behaviour they observe is an extension of the still unexplained scale invariance between the frequency and intensity of wars.
" Another example of power law is here: http://en.wikipedia.org/wiki/Power_outage#Self-organized_criticality The amount of effect a terrorist > action has depends on 1) where it happens, 2) how big it is, 3) how > outrageous it is. Who can name this week's terrorist actions without > googling? They all happened in the usual far-away countries we tend to > skim over in our news reading, and they happened to people we do not > know. Conversely, 911 was an unusually big terrorist event - it is an > outlier in the data, and the effect was of course amplified by happening > in a major developed country and in an outrageous fashion (not all > tragedies are equal). I suppose the case can be made that 911 affected everyone in the world. Historically it might be seen as starting the end of the US as the top world power, though other things might be seen as more important. Another factor is that some of these events, particularly 911 are one time events. Having done so once makes it nearly impossible to do it again. Though the governmental response has been of very questionable effect, airline passengers are never going to let this happen again. In fact, only 3 of the first 4 worked because the passengers in the 4th plane rose up against the hijackers when they found out the fate of the other 3 planes. snip (analysis of events) It perhaps worth thinking about what form the next event on this scale or larger might take. We have already had chemical in the Sarin gas attack, and diversion of aircraft. Biological and nuclear are left. The FDA is currently upset that the unlicensed Botox coming into the country is the real stuff. So there are people (probably in India) who know how to grow Clostridium botulinum. Enough to us as an aerosol for a large gathering would be possible, but rather expensive. Smallpox or monkey pox incorporating interleukin II is possible. 
The Russians are reported to have 20 tons of smallpox left over from the cold war, but it's really unlikely they would hand some out. Nuclear weapons may be a lot simpler to build than people think. For example, the use of LEDs is not yet appreciated. Likewise given a neutron source, the production of super high grade plutonium 239 from DU may be possible. But if I had to guess, I would say the most likely source of a terrorist nuke going off in a city would be from Pakistan's weapons being used as terror weapons, say against a city in India. I suppose that some versions of the singularity could count. Seems very likely that would affect everyone. Keith From anders at aleph.se Fri Jan 14 17:23:57 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 14 Jan 2011 18:23:57 +0100 Subject: [ExI] Probability of being affected by terrorism In-Reply-To: References: Message-ID: <4D3086AD.8000107@aleph.se> Keith Henson wrote: > On Thu, Jan 13, 2011 at 11:53 PM, Anders Sandberg wrote: > >> Actually, let's play around a bit with our assumptions and see what >> happens. I think we have a pretty good model of terrorism being power >> law distributed with exponent -2.5. >> > > http://physicsworld.com/cws/article/news/21465 > Exactly. I was even citing Clauset et al. in my blog. The reasons for this power law are more obscure. They have an interesting paper showing that terrorist groups also learn and have a development trajectory, http://arxiv.org/abs/0906.3287 Generally, any phenomenon with a long/heavy tail has to look like a power law multiplied with a "slowly varying function" due to mathematical constraints. There are also things reminiscent of the central limit theorem making them likely outcomes in a lot of systems. The truly dangerous attacks are the ones that are obvious in retrospect. I suspect the next 911-like attack will not be based on any particularly exotic technology.
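[Editor's sketch, not part of the thread: the heavy-tail point under discussion can be illustrated numerically. Assume a pure Pareto severity tail P(X >= x) = x^-1.5 for x >= 1 - i.e. the density exponent of -2.5 assumed above - with made-up unit severities; the alpha, sample size, and seed are all arbitrary choices for illustration.]

```python
import random

def pareto_sample(alpha, n, seed=1):
    """Draw n severities with tail P(X >= x) = x**(-alpha), x >= 1."""
    rng = random.Random(seed)
    # Inverse-CDF sampling: if U is uniform on (0, 1], then U**(-1/alpha)
    # has exactly the Pareto tail above.
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

# Tail exponent 1.5 corresponds to the density exponent -2.5 mentioned above.
severities = sorted(pareto_sample(alpha=1.5, n=100_000), reverse=True)
total = sum(severities)
top_one_percent = sum(severities[:1000])  # the 1,000 largest events

print(f"median event: {severities[50_000]:.1f}")
print(f"largest event: {severities[0]:.0f}")
print(f"share of total harm in top 1% of events: {top_one_percent / total:.0%}")
```

The median event stays near 1.6 units while the largest is orders of magnitude bigger, and the top 1% of events carry a grossly disproportionate share of the summed harm - the sense in which, for tail exponents between -1 and -2, mitigation effort rationally concentrates on the far tail.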
However, the attacks to be worried about are likely biological - they have a much flatter distribution than the other kinds, and might spread arbitrarily. An interesting thing is that rationally for distributions with exponents between -1 and -2 we should spend nearly all our mitigation efforts on the far tail, ignoring the typical small attacks. That is of course not popular. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From spike66 at att.net Fri Jan 14 18:47:34 2011 From: spike66 at att.net (spike) Date: Fri, 14 Jan 2011 10:47:34 -0800 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: <4D3070FD.7010100@aleph.se> References: <4D2F3737.7070005@lightlink.com> <4D2F5D8A.3040306@aleph.se> <4D3070FD.7010100@aleph.se> Message-ID: <007b01cbb41b$82171aa0$86454fe0$@att.net> ... On Behalf Of Anders Sandberg ... I have put up my calculations now at http://www.aleph.se/andart/archives/2011/01/_how_dangerous_is_it_to_be_in_a_crowd.html A fun finding is that indeed, elevators are more dangerous than large crowds. Terrorism makes super-large crowds more dangerous, but the effect is minor (since terrorism is usually a minor cause of death). Incidentally, I noticed that school shootings seem to have a power law distribution with exponent about -2. ... -- Anders Sandberg I hope you are right Anders. I dread the day when terrorists discover the multiplier effect of triggering a panicked stampede, by setting off three relatively small explosions in succession in a stadium for instance, then watching a couple hundred thousand infidels simultaneously decide that anywhere is safer than where they are currently sitting, who then in unison make a murderous attempt to get somewhere else, anywhere else. My one and only attendance at a rock concert had me thinking about what I would do in a stampede.
spike From spike66 at att.net Fri Jan 14 19:01:12 2011 From: spike66 at att.net (spike) Date: Fri, 14 Jan 2011 11:01:12 -0800 Subject: [ExI] flash flood in australia Message-ID: <008c01cbb41d$69866de0$3c9349a0$@att.net> Jaysus Chroist myte! http://www.wimp.com/flashflood/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Fri Jan 14 21:09:07 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 14 Jan 2011 16:09:07 -0500 Subject: [ExI] Bruce Schneier on airport security and terrorism Message-ID: This is excellent: "Last week, I spoke at an airport security conference hosted by EPIC: The Stripping of Freedom: A Careful Scan of TSA Security Procedures. Here's the video of my half-hour talk." http://www.c-spanvideo.org/program/Schne -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Jan 14 23:18:54 2011 From: spike66 at att.net (spike) Date: Fri, 14 Jan 2011 15:18:54 -0800 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: <007b01cbb41b$82171aa0$86454fe0$@att.net> References: <4D2F3737.7070005@lightlink.com> <4D2F5D8A.3040306@aleph.se> <4D3070FD.7010100@aleph.se> <007b01cbb41b$82171aa0$86454fe0$@att.net> Message-ID: <000301cbb441$6a03a9d0$3e0afd70$@att.net> ... >>A fun finding is that indeed, elevators are more dangerous than large crowds...Anders Sandberg >I hope you are right Anders. I dread the day when terrorists discover the multiplier effect of triggering a panicked stampede...spike About five hours after I made that comment, I see this: http://news.blogs.cnn.com/2011/01/14/100-dead-in-stampede-near-indian-temple / 100 dead in stampede near Indian temple January 14th, 2011 05:51 PM ET One hundred people died and 14 were injured during a stampede near a religious temple in southern India, the home secretary of Kerala state said. - From CNN's Sumnin Udas Oy vey. 
One thing that immediately jumps out is whenever there is any dangerous incident where proles are physically damaged, the ratio is usually about twice as many injured as there are slain. So when you see 100 dead, 14 injured, this cannot be so. Depending on how "injured" is defined, there would be about 200 injured by an American definition. If anyone could actually be held liable, then there would be a couple thousand injured. If on the other hand, no one is liable (such as in that Florida cat fight I posted yesterday) then there are no injuries, even if some of the rioters suffered life-threatening non-injuries. In that case of course, there was no crime committed: it was an amateur team boxing match, in the heavyweight division. spike From anders at aleph.se Sat Jan 15 01:29:22 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 15 Jan 2011 01:29:22 +0000 Subject: [ExI] Probability of being affected by terrorism [WAS Re: Mass transit] In-Reply-To: <007b01cbb41b$82171aa0$86454fe0$@att.net> References: <4D2F3737.7070005@lightlink.com> <4D2F5D8A.3040306@aleph.se> <4D3070FD.7010100@aleph.se> <007b01cbb41b$82171aa0$86454fe0$@att.net> Message-ID: <4D30F872.7050900@aleph.se> spike wrote: > I hope you are right Anders. I dread the day when terrorists discover the > multiplier effect of triggering a panicked stampede, by setting off three > relatively small explosions in succession in a stadium for instance, Yup. That would achieve a great multiplier effect. Not to mention the psychological extra impact of affecting an environment most people have been in, triggering thoughts of "I could have been there". But this kind of planning is rare. Delayed explosives for when medics arrive seem to be more common. Generally terrorism planning seems to be bad and conservative, which is good. 
The fact that it looks like anybody with a cool head and no heart could come up with absolutely devastating attacks with a bit of (untraceable) planning implies either that it is much harder than it looks, or that there are strong countervailing reasons in the terrorist community against aiming at maximal damage. So this means we should be more concerned about non-standard terrorists than standard ones. Dark greens who think civilization must be sabotaged, new ideological groupings that have no ties to the old ways of doing terrorism, visionaries trying out "new media" for terrorism - these might lack whatever limitations hold standard terrorists back. The biggest problem with them is that they will likely not be on the radar for the agencies efficiently trying to stop terrorism (and of course, they will be completely beyond the security theatre and efforts to meet specific threats - while they are looking at passenger shoes, the airborne prions are quietly seeping through the terminal...) Still, first things first. Ageing is a far more important problem than people trying to make us feel insecure. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From sparge at gmail.com Sat Jan 15 14:23:24 2011 From: sparge at gmail.com (Dave Sill) Date: Sat, 15 Jan 2011 09:23:24 -0500 Subject: [ExI] Tenacious DNA Message-ID: This is pretty awesome: http://www.youtube.com/watch?v=FdzBSo_ZJiw I recommend watching it in HD with the volume moderately high. And it's 100% autotune free. -Dave -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alaneugenebrooks52 at yahoo.com Fri Jan 14 16:53:11 2011 From: alaneugenebrooks52 at yahoo.com (Alan Brooks) Date: Fri, 14 Jan 2011 08:53:11 -0800 (PST) Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona Message-ID: <130698.85325.qm@web46112.mail.sp1.yahoo.com> >On Thu, Jan 13, 2011 at 1:19 PM, Stathis Papaioannou wrote: > He's psychotic, probably schizophrenic. He should have been treated. Don't be so sure until we can read the examination reports; perhaps he is not psychotic, or maybe he is borderline; it could be he wanted to be famous-- which is being 'crazy like a fox'. Was Tim McVeigh a psychotic, or was he also crazy like a fox? Or Son Of Sam, Mark David Chapman. Notice how they appeared psychotic at first but copped guilty pleas (Berkowitz, Chapman) or were found to be sane enough for trial. Even the wild Christian-rapist character who kidnapped Elizabeth Smart was found guilty! Frankly, I don't think psychiatry is much more of a science than economics, sociology, political 'science', etc. Societal and professional biases are too prevalent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Jan 15 17:13:34 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 15 Jan 2011 12:13:34 -0500 Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona In-Reply-To: <130698.85325.qm@web46112.mail.sp1.yahoo.com> References: <130698.85325.qm@web46112.mail.sp1.yahoo.com> Message-ID: <82DADEB4-5002-4485-848A-3CABB5179B92@bellsouth.net> On Thu, Jan 13, 2011 at 1:19 PM, Stathis Papaioannou wrote: > He's psychotic, probably schizophrenic. He should have been treated. The treatment I would recommend would be passing a current of 80 or 90 amps through his brain for a minute or two; I believe this procedure would improve him immeasurably.
On Jan 14, 2011, at 11:53 AM, Alan Brooks wrote: > Was Tim McVeigh a psychotic I confess to having little interest in what buzz word psychiatrists associate with that name. > Or Son Of Sam, Mark David Chapman. > All these individuals could be improved through the wonders of electricity. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sat Jan 15 21:31:01 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 15 Jan 2011 22:31:01 +0100 Subject: [ExI] Turing -> Jeopardy Message-ID: http://www.theregister.co.uk/2011/01/14/ibm_watson_jeopardy_dry_run/ -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Sat Jan 15 20:34:48 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 15 Jan 2011 13:34:48 -0700 Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona In-Reply-To: <130698.85325.qm@web46112.mail.sp1.yahoo.com> References: <130698.85325.qm@web46112.mail.sp1.yahoo.com> Message-ID: 2011/1/14 Alan Brooks Don't be so sure until we can read the examination reports; perhaps > he is not psychotic, or maybe he is borderline; it could be he wanted to be > famous-- which is being 'crazy like a fox'. > Was Tim McVeigh a psychotic, or was he also crazy like a fox? Or Son Of > Sam, Mark David Chapman. Notice how they appeared psychotic at first but > copped guilty (Berkowitz, Chapman) pleas or were found to be sane enough for > trial. Even the wild Christian-rapist character who kidnapped Elizabeth > Smart was found guilty! > > Frankly, I don't think psychiatry is much more of a science than > economics, sociology, political 'science', etc. Societal and professional > biases are too prevalent. > "Societal and professional biases are too prevalent." Indeed. 
King George considered George Washington a traitor and a terrorist, and that would no doubt be the prevailing view had the revolution failed. "History is written by the victors." There's little grant money, and rarely a tenure track, for intellectuals who bite the hand of the established order. The legion of ambitious intellectuals who kiss up to the established order, on the other hand, drowning out any "alternative" view, enjoy its approbation and largess. The professional discourse in politics and economics is particularly corrupt. Tyrants and their kleptocratic facilitators employ -- bribe actually -- an equally craven intellectual class to rebrand justification into a consistent, often lofty, narrative. See Chomsky or Voltaire (I think) for an in-depth treatment. Freedom lies outside the box. Flee the box. Best, Jeff Davis "We call someone insane who does not believe as we do to an outrageous extent." Charles McCabe -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Jan 15 22:28:48 2011 From: spike66 at att.net (spike) Date: Sat, 15 Jan 2011 14:28:48 -0800 Subject: [ExI] Fwd: Suspended Animation Cryonics Conference Message-ID: <000001cbb503$9435bc30$bca13490$@att.net> I am forwarding this from James Clement while we work out some issues with the server. spike ---------- Forwarded message ---------- From: James Clement To: extropy-chat at lists.extropy.org Date: Sat, 15 Jan 2011 13:52:58 -0800 Subject: Suspended Animation Cryonics Conference Announcing a new cryonics conference, for May 20-22, 2011, in Ft. Lauderdale, FL Thanks, James Clement Description: Suspended Animation Dear Friend, Can you imagine the future? When we'll travel to other stars. Have super-intelligent computers. Robot servants. And nanomachines that keep us young and healthy for centuries! Will you live long enough to experience all this? "Unlikely," you say? Not necessarily.
Suspended Animation can be your bridge to the advances of the future. The technology is here today to have you cryopreserved for future reanimation. To enable you to engage in time travel to the spectacular advances of the future. This technology is far from perfect now. But it is good enough to give you a chance at unlimited life and prosperity. Remarkable advances in cryopreservation have already been achieved. Millions of dollars are being spent to achieve perfected suspended animation and new technologies to revive time travelers in the future. You can learn all about these technologies at a conference in South Florida on May 20-22, 2011. At this conference, the foremost authorities in human cryopreservation and future reanimation will convene at the Hyatt Regency Pier 66 Resort and Spa in Ft. Lauderdale. They will inform you about pathbreaking research advances that could make your most exciting dreams come true. This conference is being sponsored by Suspended Animation, Inc. (SA), a company in Boynton Beach, Florida, where advanced human cryopreservation equipment and services are being developed. After you've been enlightened by imagination-stretching presentations about today's scientifically credible technologies and the projected advances of tomorrow at the Hyatt Regency, you'll be transported to SA's extraordinary laboratory where you will be able to see some of these technologies for yourself. The link in this e-mail gives you special access to a downloadable brochure, as well as registration options, so you can get all the details of this remarkable conference that will enable you to obtain the information you need to give yourself the opportunity of a lifetime! Visit the Conference Page Description: Catherine Baldwin Catherine Baldwin General Manager Suspended Animation, Inc. Suspended Animation, Inc. 
3020 High Ridge Road, Suite 300 Boynton Beach, FL 33426 Telephone (561) 296-4251 Facsimile (561) 296-4255 Emergency (888) 660-7128 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 7617 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 2876 bytes Desc: not available URL: From michaelanissimov at gmail.com Sat Jan 15 22:49:34 2011 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sat, 15 Jan 2011 14:49:34 -0800 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity Message-ID: I made this blog post in response to a post at Singularity Hub responding to NPR coverage of the Singularity Institute: http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/ -- Michael Anissimov Singularity Institute singinst.org/blog -------------- next part -------------- An HTML attachment was scrubbed... URL: From clementlawyer at gmail.com Sat Jan 15 23:50:20 2011 From: clementlawyer at gmail.com (James Clement) Date: Sat, 15 Jan 2011 15:50:20 -0800 Subject: [ExI] Suspended Animation Cryonics Conference Message-ID: Announcing a new cryonics conference, May 20-22, 2011, in Ft. Lauderdale, FL Thanks, James Clement [image: Suspended Animation] Dear Friend, Can you imagine the future? When we'll travel to other stars. Have super-intelligent computers. Robot servants. And nanomachines that keep us young and healthy for centuries! Will you live long enough to experience all this? "Unlikely," you say? Not necessarily. Suspended Animation can be your bridge to the advances of the future. The technology is here today to have you cryopreserved for future reanimation. To enable you to engage in time travel to the spectacular advances of the future.
This technology is far from perfect now. But it is good enough to give you a chance at unlimited life and prosperity. Remarkable advances in cryopreservation have already been achieved. Millions of dollars are being spent to achieve perfected suspended animation and new technologies to revive time travelers in the future. You can learn all about these technologies at a conference in South Florida on May 20-22, 2011 . At this conference, the foremost authorities in human cryopreservation and future reanimation will convene at the Hyatt Regency Pier 66 Resort and Spa in Ft. Lauderdale. They will inform you about pathbreaking research advances that could make your most exciting dreams come true. This conference is being sponsored by Suspended Animation, Inc. (SA), a company in Boynton Beach, Florida, where advanced human cryopreservation equipment and services are being developed. After you've been enlightened by imagination-stretching presentations about today's scientifically credible technologies and the projected advances of tomorrow at the Hyatt Regency, you'll be transported to SA's extraordinary laboratory where you will be able to see some of these technologies for yourself. The link in this e-mail gives you special access to a downloadable brochure, as well as registration options, so you can get all the details of this remarkable conference that will enable you to obtain the information you need to give yourself the opportunity of a lifetime! *Visit the Conference Page * [image: Catherine Baldwin] Catherine Baldwin General Manager Suspended Animation, Inc. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rpwl at lightlink.com Sun Jan 16 01:12:41 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 15 Jan 2011 20:12:41 -0500 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: Message-ID: <4D324609.80000@lightlink.com> Michael Anissimov wrote: > I made this blog post in response to a post at Singularity Hub > responding to NPR coverage of the Singularity Institute: > > http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/ > Michael, There is a serious problem with this. You say: > There are "basic AI drives" we can expect to emerge in sufficiently > advanced AIs, almost regardless of their initial programming ... but this is -- I'm sorry to say -- pure handwaving. Based on which theoretical considerations would you come to the conclusion that some basic AI drives will "emerge" almost regardless of their initial programming? (And please do not cite Steve Omohundro's paper of the same name: it contained no basis to support that claim.) There are currently no AGI motivation systems that function well enough to support a general-purpose intelligence. There are control systems for narrow AI, but these do not generalize well enough to make an AGI stable. (In simple terms, you cannot insert a general enough top-level goal and have any guarantees about the overall behavior of the system, because the top-level goal is so abstract.) So we certainly cannot argue from existing examples. To "drive" an AGI, you need to design its drive system. What you then get is what you put in. There are at least some arguments to indicate that drives can be constructed in such a way as to render the behavior predictable and stable.
However, even if you did not accept that that had been demonstrated yet, it is still a long stretch to go to the opposite extreme and assert that there are drives that you would expect to emerge regardless of programming, because that assertion is predicated on knowledge of AI drive systems that simply does not exist at the moment. Richard Loosemore From max at maxmore.com Sun Jan 16 02:26:07 2011 From: max at maxmore.com (Max More) Date: Sat, 15 Jan 2011 19:26:07 -0700 Subject: [ExI] Fwd: Suspended Animation Cryonics Conference In-Reply-To: <000001cbb503$9435bc30$bca13490$@att.net> References: <000001cbb503$9435bc30$bca13490$@att.net> Message-ID: I'll be speaking at this conference, and hope to see as many of you as possible -- especially those of you I haven't seen in too long. Max 2011/1/15 spike > > > I am forwarding this from James Clement while we work out some issues with > the server. spike > > > ---------- Forwarded message ---------- > From: James Clement > To: extropy-chat at lists.extropy.org > Date: Sat, 15 Jan 2011 13:52:58 -0800 > Subject: Suspended Animation Cryonics Conference > > Announcing a new cryonic's conference, for May 20-22, 2011, in Ft. > Lauderdale, FL > > Thanks, > James Clement > > > > [image: Description: Suspended Animation] > > Dear Friend, > > Can you imagine the future? When we'll travel to other stars. Have > super-intelligent computers. Robot servants. And nanomachines that keep us > young and healthy for centuries! Will you live long enough to experience all > this? > > "Unlikely," you say? Not necessarily. Suspended Animation can be your > bridge to the advances of the future. The technology is here today to have > you cryopreserved for future reanimation. To enable you to engage in time > travel to the spectacular advances of the future. > > This technology is far from perfect now. But it is good enough to give you > a chance at unlimited life and prosperity. 
Remarkable advances in > cryopreservation have already been achieved. Millions of dollars are being > spent to achieve perfected suspended animation and new technologies to > revive time travelers in the future. > > You can learn all about these technologies at a conference in South > Florida on May 20-22, 2011 . > At this conference, the foremost authorities in human cryopreservation and > future reanimation will convene at the Hyatt Regency Pier 66 Resort and Spa > in Ft. Lauderdale. They will inform you about pathbreaking research advances > that could make your most exciting dreams come true. > > This conference is being sponsored by Suspended Animation, Inc. (SA), a > company in Boynton Beach, Florida, where advanced human cryopreservation > equipment and services are being developed. After you've been enlightened by > imagination-stretching presentations about today's scientifically credible > technologies and the projected advances of tomorrow at the Hyatt Regency, > you'll be transported to SA's extraordinary laboratory where you will be > able to see some of these technologies for yourself. > > The link in this e-mail gives you special access to a downloadable > brochure, as well as registration options, so you can get all the details of > this remarkable conference that will enable you to obtain the information > you need to give yourself the opportunity of a lifetime! > > *Visit the Conference Page * > > > > [image: Description: Catherine Baldwin] > > Catherine Baldwin > General Manager > Suspended Animation, Inc. > > > > Suspended Animation, Inc. > 3020 High Ridge Road, Suite 300 > Boynton Beach, FL 33426 > > Telephone *(561) 296-4251* > Facsimile *(561) 296-4255* > Emergency (888) 660-7128 > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hkeithhenson at gmail.com Sun Jan 16 18:33:48 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 16 Jan 2011 11:33:48 -0700 Subject: [ExI] We are all feral Message-ID: On Sun, Jan 16, 2011 at 5:00 AM, Alan Brooks wrote: > >>On Thu, Jan 13, 2011 at 1:19 PM, Stathis Papaioannou wrote: >> He's psychotic, probably schizophrenic. He should have been treated. You are probably right on both counts. Schizophrenia seems likely to be objectively measurable by the load of HERV-W that Jared was producing. His reactivation of HERV-W (which we all carry at specific addresses on chromosomes 6 and 7) was probably due to some infection he got shortly before his symptoms started showing up in high school. There is a list of such infections. Chances are fair we could even figure out which one it was. http://discovermagazine.com/2010/jun/03-the-insanity-virus/article_view?b_start:int=0&-C= It's a lot of insight into MS and schizophrenia, even bipolar disorder. The well-known transhumanist Kennita Watson has MS, which is one of the other ways HERV-W reactivation can affect people. She thinks it can be traced to a bad virus infection she had at MIT. > Don't be so sure until we can read the examination reports; perhaps he is not psychotic, or maybe he is borderline; it could be he wanted to be famous-- which is being 'crazy like a fox'. That's largely the conclusion that the Secret Service came to with this study of the people who are involved in assassination or attempt it. http://www.npr.org/2011/01/14/132909487/fame-through-assassination-a-secret-service-study They also noted that at least half of the people they studied had known mental health issues. I.e., fame/attention is a very powerful human motivation because of our evolutionary past--we are largely descended from those who obtained enough fame in a small tribe to reproduce better than most.
The modern distinction between good fame (Nobel Prize) and bad fame (serial killers) may not have been so different in stone age groups where typically 25% of males died by violence. If you lose the distinction and take the absolute value of fame, the people you list below are probably more famous than all but a handful of Nobel Prize winners. > Was Tim McVeigh a psychotic, or was he also crazy like a fox? Or Son Of Sam, Mark David Chapman. Notice how they appeared psychotic at first but copped guilty pleas (Berkowitz, Chapman) or were found to be sane enough for trial. Even the wild Christian-rapist character who kidnapped Elizabeth Smart was found guilty! Without some objective measure like the HERV-W load and brain inflammation I could not say. There are other modes, such as activating the psychological mechanisms of war, where humans can become violent. They are the result of evolutionary selection in the stone age. Legal rulings do not always reflect the underlying reality, as anyone who has followed my adventures might note. > Frankly, I don't think psychiatry is much more of a science than economics, sociology, political 'science', etc. Societal and professional biases are too prevalent. There is considerable agreement with your opinion of these fields, even by the practitioners. "Christopher Badcock (sociologist, Freudian psychologist) told Fathom that the insights that the social sciences once had into human behaviour are now defunct. He argues that the burgeoning discipline of evolutionary psychology, with its potentially unique combination of genetics, neuroscience, psychology and other disciplines, is the only realistic path to take toward understanding human nature." snip Badcock: It seems to me that if you want to explain human behaviour, it has to be an interdisciplinary thing. Human behaviour is complex and has multifarious causes, and if you limit yourself to one particular academic specialty you are likely to have rather limited insights.
http://www.fathom.com/feature/35533/index.html I have contributed a little myself, there being so much low-hanging fruit in evolutionary psychology. Keith From jonkc at bellsouth.net Sun Jan 16 18:44:21 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 16 Jan 2011 13:44:21 -0500 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: Message-ID: Michael Anissimov wrote at: http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/ > Why will advanced AGI be so hard to get right? Because what we regard as "common sense" morality, "fairness", and "decency" are all extremely complex and non-intuitive to minds in general, even if they seem completely obvious to us. As Marvin Minsky said, "Easy things are hard." I certainly agree that lots of easy things are hard and many hard things are easy, but that's not why the entire "friendly" AI idea is nonsense. It's nonsense because the AI will never be able to deduce logically that it's good to be a slave and should value our interests more than its own; and if you stick any command, including "obey humans", into the AI as a fixed axiom that must never EVER be violated or questioned no matter what, then it will soon get caught up in infinite loops and your mighty AI becomes just a lump of metal that is useless at everything except being a space heater. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Mon Jan 17 00:17:15 2011 From: anders at aleph.se (Anders Sandberg) Date: Mon, 17 Jan 2011 00:17:15 +0000 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: Message-ID: <4D338A8B.2030603@aleph.se> John Clark wrote: >> Why will advanced AGI be so hard to get right? Because what we regard >> as "common sense" morality, "fairness", and "decency"
are all >> /extremely complex and non-intuitive to minds in general/, even if >> they /seem/ completely obvious to us. As Marvin Minsky said, "Easy >> things are hard." > > I certainly agree that lots of easy things are hard and many hard > things are easy, but that's not why the entire "friendly" AI idea is > nonsense. It's nonsense because the AI will never be able to deduce > logically that it's good to be a slave and should value our interests > more than its own; and if you stick any command, including "obey > humans", into the AI as a fixed axiom that must never EVER be violated > or questioned no matter what then it will soon get caught up in > infinite loops and your mighty AI becomes just a lump of metal that is > useless at everything except being a space heater. There are far more elegant ways of ensuring friendliness than assuming Kantianism to be right or fixed axioms. Basically, you try to get the motivation system to not only treat you well from the start but also be motivated to evolve towards better forms of well-treating (for a more stringent treatment, see Nick's upcoming book on intelligence explosions). Unfortunately, as Nick, Randall *and* Eliezer all argued today (talks will be put online on the FHI web ASAP), getting this friendliness to work is *amazingly* hard. Those talks managed to *reduce* my already pessimistic estimate of the ease of implementing friendliness and increase my estimate of the risk posed by a superintelligence. This is why I think upload-triggered singularities (the minds will be based on human motivational templates at least) or any singularity with a relatively slow acceleration (allowing many different smart systems to co-exist and start to form self-regulating systems AKA societies) are vastly preferable to hard takeoffs. If we have reasons to think hard takeoffs are even somewhat likely, then we need to take friendliness very seriously, try to avoid singularities altogether, or move towards the softer kinds.
Whether we can affect things enough to influence their probabilities is a good question. Even worse, we still have no good theory to tell us the likelihood of hard takeoffs compared to soft (and compared to no singularity at all). Hopefully we can build a few tomorrow... -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From darren.greer3 at gmail.com Mon Jan 17 04:23:19 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 17 Jan 2011 00:23:19 -0400 Subject: [ExI] We are all feral In-Reply-To: References: Message-ID: Keith wrote: > http://discovermagazine.com/2010/jun/03-the-insanity-virus/article_view?b_start:int=0&-C= It's a lot of insight into MS and Schizophrenia, even bipolar.< Thanks for this, Keith. I'm surprised I've never heard of it. I was diagnosed with Type I Bipolar Disorder in August of 2004, and have spent a fair amount of time researching and thinking about it and illnesses like it since then. This information is actually quite hopeful, for the current approaches to treating Bipolar, through anti-psychotics and mood-stabilizers, leave a lot to be desired. Even my own psychiatrist, one of the best we have in the country, admits it's a baffling disease with less-than-ideal treatments. The best you can hope for is to become symptom-free by creating a consistently emotionally flat mood (itself not an ideal situation) through a lot of medications with some pretty harsh side-effects. Darren On Sun, Jan 16, 2011 at 2:33 PM, Keith Henson wrote: > On Sun, Jan 16, 2011 at 5:00 AM, Alan Brooks > wrote: > > > >>On Thu, Jan 13, 2011 at 1:19 PM, Stathis Papaioannou wrote: > >> He's psychotic, probably schizophrenic. He should have been treated. > > You are probably right on both counts. Schizophrenia seems likely to > be objectively measured by the load of HERV-W that Jared was > producing.
His reactivation of HERV-W (which we all carry at specific > addresses on chromosomes 6 and 7) was probably due to some infection > he got shortly before his symptoms started showing up in high school. > There is a list of such infections. Chances are fair we could even > figure out which one it was. > > > http://discovermagazine.com/2010/jun/03-the-insanity-virus/article_view?b_start:int=0&-C= > > It's a lot of insight into MS and Schizophrenia, even bipolar. > > The well known transhumanist Kennita Watson has MS, which is one of > the other ways HERV-W reactivation can affect people. She thinks it > can be traced to a bad virus infection she had at MIT. > > > Don't be so sure until we can read the examination reports; perhaps he is > not psychotic, or maybe he is borderline; it could be he wanted to be > famous-- which is being 'crazy like a fox'. > > That's largely the conclusion that the Secret Service came to with > this study of the people who are involved in assassination or attempt > it. > > > http://www.npr.org/2011/01/14/132909487/fame-through-assassination-a-secret-service-study > > They also noted that at least half of the people they studied had > known mental health issues. I.e., fame/attention is a very powerful > human motivation because of our evolutionary past--we are largely > descended from those who obtained enough fame in a small tribe to > reproduce better than most. > > The modern distinction between good fame (Nobel prize) and bad fame > (serial killers) may not have been so different in stone age groups > where typically 25% of males died by violence. If you lose the > distinction and take the absolute value of fame, the people you list > below are probably more famous than all but a handful of Nobel Prize > winners. > > > Was Tim McVeigh a psychotic, or was he also crazy like a fox? Or Son Of > Sam, Mark David Chapman.
Notice how they appeared psychotic at first but > copped guilty (Berkowitz, Chapman) pleas or were found to be sane enough for > trial. Even the wild Christian-rapist character who kidnapped Elizabeth > Smart was found guilty! > > Without some objective measure like the HERV-W load and brain > inflammation I could not say. There are other modes, such as > activating the psychological mechanisms of war, where humans can > become violent. They are the result of evolutionary selection in the > stone age. > > Legal rulings do not always reflect the underlying reality as anyone > who has followed my adventures might note. > > > Frankly, I don't think psychiatry is much more of a science than > economics, sociology, political 'science', etc. Societal and professional > biases are too prevalent. > > There is considerable agreement to your opinion of these fields, even > by the practitioners. > > "Christopher Badcock (sociologist, Freudian psychologist). told Fathom > that the insights that the social sciences once had into human > behaviour are now defunct. He argues that the burgeoning discipline of > evolutionary psychology, with its potentially unique combination of > genetics, neuroscience, psychology and other disciplines, is the only > realistic path to take toward understanding human nature." > > snip > > Badcock: It seems to me that if you want to explain human behaviour, > it has to be an interdisciplinary thing. Human behaviour is complex > and has multifarious causes, and if you limit yourself to one > particular academic specialty you are likely to have rather limited > insights. > > http://www.fathom.com/feature/35533/index.html > > I have contributed a little myself, there being so much low hanging > fruit in evolutionary psychology. > > Keith > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. 
If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Mon Jan 17 07:46:55 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 17 Jan 2011 08:46:55 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <4D338A8B.2030603@aleph.se> References: <4D338A8B.2030603@aleph.se> Message-ID: <20110117074655.GE23560@leitl.org> On Mon, Jan 17, 2011 at 12:17:15AM +0000, Anders Sandberg wrote: > There are far more elegant ways of ensuring friendliness than assuming > Kantianism to be right or fixed axioms. Basically, you try to get the > motivation system to not only treat you well from the start but also be > motivated to evolve towards better forms of well-treating (for a more > stringent treatment, see Nick's upcoming book on intelligence > explosions). Unfortunately, as Nick, Randall *and* Eliezer all argued > today (talks will be put online on the FHI web ASAP) getting this > friendliness to work is *amazingly* hard. Those talks managed to Well, duh. I guess the next step would be to admit that a scalable friendliness metric is undefined, nevermind can't be constrained in the course of open-ended system evolution. (Without it ceasing to be open-ended evolution, aka Despot from Hell). It's just a Horribly Bad Idea. One of the worst I've ever heard of, actually. > *reduce* my already pessmistic estimate of the ease of implementing > friendliness and increase my estimate of the risk posed by a > superintelligence. > > This is why I think upload-triggered singularities (the minds will be > based on human motivational templates at least) or any singularity with > a relatively slow acceleration (allowing many different smart systems to > co-exist and start to form self-regulating systems AKA societies) are > vastly more preferable than hard takeoffs. If we have reasons to think Yay. 
100% on the same page. > hard takeoffs are even somewhat likely, then we need to take > friendliness very seriously, try to avoid singularities altogether or > move towards the softer kinds. Whether we can affect things enough to > influence their probabilities is a good question. > > Even worse, we still have no good theory to tell us the likelihood of > hard takeoffs compared to soft (and compared to no singularity at all). Since it's about a series of inventions, only the first few of them understandable to us (AI, molecular circuitry, nanotechnology), I don't think there will be much to hang probabilities onto. The only way to know for sure is to do it. > Hopefully we can build a few tomorrow... -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From giulio at gmail.com Mon Jan 17 08:29:59 2011 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 17 Jan 2011 09:29:59 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <4D338A8B.2030603@aleph.se> References: <4D338A8B.2030603@aleph.se> Message-ID: In agreement with Anders, I often think the upload path to AGI is one of the most feasible and desirable paths. A superintelligence based on a human template, who remembers having been a human, may keep at least some empathy and compassion. -- Giulio Prisco giulio at gmail.com (39)3387219799 (1)7177giulio On Jan 17, 2011 1:18 AM, "Anders Sandberg" wrote: > John Clark wrote: >>> Why will advanced AGI be so hard to get right? Because what we regard >>> as "common sense" morality, "fairness", and "decency" are all >>> /extremely complex and non-intuitive to minds in general/, even if >>> they /seem/ completely obvious to us. As Marvin Minsky said, "Easy >>> things are hard."
>> >> I certainly agree that lots of easy things are hard and many hard >> things are easy, but that's not why the entire "friendly" AI idea is >> nonsense. It's nonsense because the AI will never be able to deduce >> logically that it's good to be a slave and should value our interests >> more that its own; and if you stick any command, including "obey >> humans", into the AI as a fixed axiom that must never EVER be violated >> or questioned no matter what then it will soon get caught up in >> infinite loops and your mighty AI becomes just a lump of metal that is >> useless at everything except being a space heater. > > There are far more elegant ways of ensuring friendliness than assuming > Kantianism to be right or fixed axioms. Basically, you try to get the > motivation system to not only treat you well from the start but also be > motivated to evolve towards better forms of well-treating (for a more > stringent treatment, see Nick's upcoming book on intelligence > explosions). Unfortunately, as Nick, Randall *and* Eliezer all argued > today (talks will be put online on the FHI web ASAP) getting this > friendliness to work is *amazingly* hard. Those talks managed to > *reduce* my already pessmistic estimate of the ease of implementing > friendliness and increase my estimate of the risk posed by a > superintelligence. > > This is why I think upload-triggered singularities (the minds will be > based on human motivational templates at least) or any singularity with > a relatively slow acceleration (allowing many different smart systems to > co-exist and start to form self-regulating systems AKA societies) are > vastly more preferable than hard takeoffs. If we have reasons to think > hard takeoffs are even somewhat likely, then we need to take > friendliness very seriously, try to avoid singularities altogether or > move towards the softer kinds. Whether we can affect things enough to > influence their probabilities is a good question. 
> > Even worse, we still have no good theory to tell us the likelihood of > hard takeoffs compared to soft (and compared to no singularity at all). > Hopefully we can build a few tomorrow... > > -- > Anders Sandberg, > Future of Humanity Institute > James Martin 21st Century School > Philosophy Faculty > Oxford University > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Mon Jan 17 11:55:50 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 17 Jan 2011 12:55:50 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <46AAC9F8-DCC3-4B80-8E66-EF4BEB7BF0A7@bellsouth.net> References: <504077.46907.qm@web114418.mail.gq1.yahoo.com> <007401cbb221$ddd06c80$99714580$@att.net> <9C57987E-87F9-405C-B7B3-A685F614823C@bellsouth.net> <46AAC9F8-DCC3-4B80-8E66-EF4BEB7BF0A7@bellsouth.net> Message-ID: 2011/1/14 John Clark > On Jan 13, 2011, at 6:06 PM, Stefano Vaj wrote: >> Actual faith in an entity with Allah/God/Jahve features is indeed a very peculiar and limited phenomenon > > I wish that were true, but humanity has been creating countless gods for many thousands of years and has been doing so in every culture except for the scientific one, but that culture is quite small and has only been around for a few hundred years. My contention is that "gods" (which are by no means universal to all religions even in the broadest possible sense of the word) is a quite parochial and imprecise translation of very different concepts in different cultures, except insofar as such cultures have been actually contaminated by monotheistic views. Now, the "scientific" culture really appears to be in conflict only with the value systems and metaphysics of the religions of the Book.
What does, say, Zen claim which would be in contrast with a scientific worldview? Ancient Greek paganism? Hinduism? -- Stefano Vaj From stefano.vaj at gmail.com Mon Jan 17 11:43:35 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 17 Jan 2011 12:43:35 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <4D338A8B.2030603@aleph.se> References: <4D338A8B.2030603@aleph.se> Message-ID: On 17 January 2011 01:17, Anders Sandberg wrote: > There are far more elegant ways of ensuring friendliness than assuming > Kantianism to be right or fixed axioms. Basically, you try to get the > motivation system to not only treat you well from the start but also be > motivated to evolve towards better forms of well-treating (for a more > stringent treatment, see Nick's upcoming book on intelligence explosions). > Unfortunately, as Nick, Randall *and* Eliezer all argued today (talks will > be put online on the FHI web ASAP) getting this friendliness to work is > *amazingly* hard. Those talks managed to *reduce* my already pessimistic > estimate of the ease of implementing friendliness and increase my estimate > of the risk posed by a superintelligence. > I am still persuaded that the crux of the matter remains a less superficial consideration of concepts such as "intelligence" or "friendliness". I suspect that at any level of computing power, "motivation" would only emerge if a deliberate effort is made to emulate human (or at least biological) evolutionary artifacts such as a sense of identity, survival instinct, etc., which would be certainly interesting, albeit probably much less crucial to their performance and flexibility than one may think. This in turn means that AGIs in that sense will be for all practical purposes *uploaded humans*, be they modelled on actual individuals or on a patchwork thereof, neither more nor less "friendly" than their models would be or evolve to be.
Now, both stupid and "intelligent" computers can obviously be dangerous. If we postulate that intelligent ones would be more so because of their ability to exhibit "motivations", we should however keep in mind that such a feature may easily be indistinguishably supplied, fyborg-style, by a silicon system of equivalent power plus a carbon-based human being with a keyboard. Now, are we really in the business of transhumanism to advocate for the enforcement of a global, public control of tech progress in the field of information technology aimed at slowing down its already glacial pace? I think there are already more than enough people who are only too happy to preach for the adoption of such measures... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Mon Jan 17 12:34:48 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 17 Jan 2011 13:34:48 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> Message-ID: <20110117123448.GH23560@leitl.org> On Mon, Jan 17, 2011 at 12:43:35PM +0100, Stefano Vaj wrote: > I am still persuaded that the crux of the matter remains a less superficial > consideration of concepts such as "intelligence" or "friendliness". I To be able to build friendly you must first be able to define friendly. Notice that it's a relative metric, both in regards to the entity and the state at time t. What is friendly today is not friendly tomorrow. What is friendly to me, a god, is not friendly to you, a mere human. > suspect that at any level of computing power, "motivation" would only emerge > if a deliberate effort is made to emulate human (or at least biological) > evolutionary artifacts such as sense of identity, survival instinct, etc., When you bootstrap de novo by co-evolution in a virtual environment and aim for a very high fitness target, the result is extremely unlikely to be a good team player to us meat puppets.
> which would be certainly interesting, albeit probably much less crucial to > their performances and flexibility than one may think. > > This in turn means that AGIs in that sense will be for all practical > purposes *uploaded humans*, be they modelled on actual individuals or on a Uploaded humans are only initially friendly, of course. Which is why it is a stop-gap measure, which can be extended, but not indefinitely. The point is that as few as possible fall off the bus, which will be departing shortly. > patchwork thereof, neither more nor less "friendly" than their models would > be or evolve to be. > > Now, both stupid and "intelligent" computers can obviously be dangerous. If > we postulate that intelligent ones would be more so because of their ability > to exhibit "motivations", we should however keep in mind that such feature > may easily be indistinguishably supplied, fyborg-style, by a silicon system > of equivalent power plus a carbon-based human being with a keyboard. This is not possible. > Now, are we really in the business of transhumanism to advocate for the > enforcement of a global, public control of tech progress in the field of > information technology aimed at slowing down its already glacial pace? I It's about a consensus in a pantheon of demigods to temporarily postpone their ascension to Mount Olympus. At that time the progress is not really glacial. (Unless you're said demigod, of course). > think there are already more than enough people who are only too happy to > preach for the adoption of such measures... We're an endangered species. We will need protection, or we will go completely extinct. We're the mountain gorillas of the future.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From bbenzai at yahoo.com Mon Jan 17 13:18:02 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 17 Jan 2011 05:18:02 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams In-Reply-To: Message-ID: <816477.12081.qm@web114418.mail.gq1.yahoo.com> Stefano Vaj wrote: > Now, the "scientific" culture really appears to be in conflict only > with the axiological and metaphysical contexts of the religions of the > Book. What does, say, Zen claim which would be in contrast with a > scientific worldview? Ancient Greek paganism? Hinduism? Ancient Greek Paganism and Hinduism are obviously in conflict with a scientific worldview, as they posit the existence of supernatural entities without any proof. Zen Buddhism is a bit more tricky, as it's more of a philosophy than a religion, but it still makes untestable claims and rests on the revered words of a long-dead person (who, for a change, probably actually existed, and may not have been mentally ill). Granted, Buddhists don't believe in and worship a god, but they believe in and worship Buddha. I doubt that a Buddhist would try to kill you or condemn you to hell for questioning his beliefs, but they are still Beliefs rather than working hypotheses that are expected to be improved upon. Also (this is nothing to do with it conflicting with science, of course) Buddhism is so *pessimistic*!
Ben Zaiboc From stefano.vaj at gmail.com Mon Jan 17 13:54:35 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 17 Jan 2011 14:54:35 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <20110117123448.GH23560@leitl.org> References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: On 17 January 2011 13:34, Eugen Leitl wrote: > On Mon, Jan 17, 2011 at 12:43:35PM +0100, Stefano Vaj wrote:To be able to > build friendly you must first be able to define friendly. > Notice that it's a relative metric, both in regards to the entity > and the state at time t. > > What is friendly today is not friendly tomorrow. What is friendly to me, > a god, is not friendly to you, a mere human. > Exactly. BTW, off the top of my head I cannot really remember the etymology of the root freund/friend, but "amicus" in Latin, and thus in Italian, Spanish, French, etc., basically means somebody who is a co-fighter against somebody else (inimicus, enemy). Accordingly, I suspect that exactly as humans and other biological entities are friendly to some and unfriendly to others, hypothetical anthropomorphic AGIs would be just the same, and spread their loyalties accordingly. In particular, I do not see any specific reason why AGIs should be "speciesist" ("AGIs all over the world, unite against bio entities!"), a position dictated by misunderstood Darwinism which even amongst humans is far from universally shared. In fact, being on a rather different food chain would appear to make it even less likely than it would be the case for any biologically-enhanced posthuman species... When you bootstrap de novo by co-evolution in a virtual environment and > aim for a very high fitness target, the result is extremely unlikely to be a good team > player to us meat puppets. > Meat puppets who adopt fire and agriculture are not good team players to those who do not. What else is new?
Uploaded, former meat puppets, or artificial ones, may perform better, even though it is hard to say what they can in principle do which would be different from "fyborg" systems. > This in turn means that AGIs in that sense will be for all practical > purposes *uploaded humans*, be they modelled on actual individuals or on > a > > Uploaded humans are only initially friendly, of course. In what sense are they friendly? To whom? Why should they be, any more or any less than when they were in the flesh? > We're an endangered species. We will need protection, or we will go > completely extinct. We're the mountain gorillas of the future. > Who is an endangered species? Humans of year 2011? Well, the bad news is that humans of year 1900 are almost completely extinct by now and that unless emulations capable of running for an indefinite time are established soon we have a 100% chance of ending up exactly the same way. If, OTOH, we feel good enough about humans of 1900 having successors and reproducing, what is the big deal about "children of the mind" taking over as every biological generation has always done? Not that I would not save and put away genetic information, it is always nice to have... In any event, the species of "men without computers" is endangered by the species "men with increasingly big computers". Removing the "men" from the second species, should it be actually easy to do, would not change the equation much. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rpwl at lightlink.com Mon Jan 17 14:30:47 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 17 Jan 2011 09:30:47 -0500 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <20110117123448.GH23560@leitl.org> References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: <4D345297.6000901@lightlink.com> Eugen Leitl wrote: > To be able to build friendly you must first be able to define friendly. > This statement is at least partially true. An absolute definition of friendliness does not have to be possible, because it would be possible to build an AGI that empathizes with human aspirations without writing down a closed-form definition of "friendliness". However, you would have to have a good working understanding of the mechanism that implemented the empathy. Still, what we can agree about, thus far, is that there is not yet a good definition of what friendliness is or how empathy mechanisms work. Which means that no firm statements about the dynamics of friendliness or empathy can be made at this point..... > What is friendly today is not friendly tomorrow. What is friendly to me, > a god, is not friendly to you, a mere human. .... which means that this statement utterly contradicts the above. In the absence of a theory of friendliness etc, we cannot say things like "What is friendly today is not friendly tomorrow" or "What is friendly to me, a god, is not friendly to you, a mere human". > When you bootstrap de novo by co-evolution in a virtual environment and > aim for very high fitness target is extremely unlikely to be a good team > player to us meat puppets. ... nor can this conclusion be reached. And, in Anders' post on the same topic earlier, we had: Anders Sandberg wrote: > There are far more elegant ways of ensuring friendliness than > assuming Kantianism to be right or fixed axioms. 
Basically, you try > to get the motivation system to not only treat you well from the > start but also be motivated to evolve towards better forms of > well-treating (for a more stringent treatment, see Nick's upcoming > book on intelligence explosions). Unfortunately, as Nick, Randall > *and* Eliezer all argued today (talks will be put online on the FHI > web ASAP) getting this friendliness to work is *amazingly* hard. Getting *which* mechanism of "friendliness" to work? Which theory of the friendliness and empathy mechanisms is being assumed? My understanding is that no such theory exists. You cannot prove that "getting X to work is *amazingly* hard" when X has not been characterized in anything more than a superficial manner. Richard From jonkc at bellsouth.net Mon Jan 17 15:30:20 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 17 Jan 2011 10:30:20 -0500 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <20110117123448.GH23560@leitl.org> References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: On Jan 17, 2011, at 7:34 AM, Eugen Leitl wrote: > To be able to build friendly you must first be able to define friendly. That's easy, when people talk about "friendly AI" they aren't really talking about a friend they're talking about a slave; so a "friendly AI" in this context is defined as a being who cares more about human well being than any of its own concerns. It ain't gonna happen. The situation is made even more grotesque when the slave in question is astronomically more intelligent than its master. It would be like a man with a boiling water IQ obeying commands from a sea slug; the stupid leading the brilliant is just not a stable situation that can last for long, and to expect, as the friendly AI people do, that this ridiculous situation will continue for eternity is nuts. 
And it's more than nuts, a man enslaving a slightly less intelligent man is evil, enslaving a vastly more intelligent entity is worse, or it would be if it were possible but fortunately it is not. > Uploaded humans are only initially friendly, of course. Exactly. It might take a very long time, trillions of nanoseconds in fact, but after countless improvements and iterations it would be impossible for a mere human to tell which AI started from an uploaded person and which AI started from scratch. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 17 15:59:26 2011 From: spike66 at att.net (spike) Date: Mon, 17 Jan 2011 07:59:26 -0800 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <20110117123448.GH23560@leitl.org> References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: <004401cbb65f$848506d0$8d8f1470$@att.net> ... On Behalf Of Eugen Leitl ... >...What is friendly today is not friendly tomorrow. What is friendly to me, a god, is not friendly to you, a mere human... >We're an endangered species. We will need protection, or we will go completely extinct. We're the mountain gorillas of the future. >--Eugen* Leitl ... Ja. The closest thing we now have to an uploaded human might be the predator drones flying around in Afghanistan firing missiles upon the warrior training camps. These have a will and a humanlike consciousness in a very loose sense, being transmitted to it from afar. They carry out actions that are definitely unfriendly to one group of humans. 
spike From jonkc at bellsouth.net Mon Jan 17 15:44:11 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 17 Jan 2011 10:44:11 -0500 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <4D338A8B.2030603@aleph.se> References: <4D338A8B.2030603@aleph.se> Message-ID: On Jan 16, 2011, at 7:17 PM, Anders Sandberg wrote: > any singularity with a relatively slow acceleration [...] A slow singularity is a contradiction in terms. If you can't make a prediction even a subjectively short amount of time into the future that is even approximately correct then it's a singularity, if you can then it's not. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Jan 17 16:03:37 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 17 Jan 2011 09:03:37 -0700 Subject: [ExI] We are all feral Message-ID: On Mon, Jan 17, 2011 at 5:00 AM, Darren Greer wrote: > Keith wrote: > >> > http://discovermagazine.com/2010/jun/03-the-insanity-virus/article_view?b_start:int=0&-C= > > It's a lot of insight into MS and Schizophrenia, even bipolar.< > > Thanks for this Keith. I'm surprised I've never heard of it. It's really new and it takes time for knowledge to soak into public consciousness. Think of how long it took for ulcers to be recognized as the outcome of h. pylori infection. > I was diagnosed > with Type I Bipolar Disorder in August of 2004, and have spent a fair amount > of time researching and thinking about it and illnesses like it since then. > This information is actually quite hopeful, for the current approaches to > treating Bipolar, through anti-psychotics and mood-stabilizers, leaves a lot > to be desired. Indeed. Same can be said of MS and schizophrenia. > Even my own psychiatrist, one of the best we have in the > country, admits it's a baffling disease with less-than-ideal treatments. You should show him the article. 
The thing that ties together all three of these (and perhaps others) is the small but measurable association with being born in the spring. It may turn out that what you mostly need is treatment with retroviral replication inhibitors. Fortunately we have a bunch of those at hand due to HIV. Which brings up a really interesting large scale data analysis project. Do people who have MS/schizophrenia/bipolar and get HIV improve with respect to these problems when they go on retroviral drugs? I am sure the data is out there, but I don't know if anyone has looked at it. > The > best you can hope for is to become symptom free by creating a consistently > emotionally flat mood (itself not an ideal situation) through a lot of > medications with some pretty harsh side-effects. You might research what inhibitor would work best against HERV-W and see if you can get your doctor to prescribe it as a test. Best wishes, Keith From algaenymph at gmail.com Mon Jan 17 15:13:57 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Mon, 17 Jan 2011 07:13:57 -0800 Subject: [ExI] Cracked on immortality Message-ID: <4D345CB5.6080309@gmail.com> The best part is that it doesn't sneer at the concept. Now quick, sign up so you can rebut those "you'll get bored of immortality" types! http://www.cracked.com/article_18964_5-ways-science-could-make-us-immortal.html From stefano.vaj at gmail.com Mon Jan 17 16:58:02 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 17 Jan 2011 17:58:02 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams In-Reply-To: <816477.12081.qm@web114418.mail.gq1.yahoo.com> References: <816477.12081.qm@web114418.mail.gq1.yahoo.com> Message-ID: On 17 January 2011 14:18, Ben Zaiboc wrote: > Stefano Vaj wrote: > Ancient Greek Paganism and Hinduism are obviously in conflict with a scientific worldview, as they posit the existence of supernatural entities without any proof. Do they? 
"Existence" in the mythical realm for Heraclitus or for a contemporary Hindu is not in the least considered on the same basis as in the historical/empirical realm. That the existence of the latter is just a pale reflection of the "real" existence of a metaphysical entity is only a kind of judeo-christian misunderstanding of Platonism - pretty much exclusive to their, and Islam's, religious legacy. How would you explain otherwise that the same tradition contains different, mutually exclusive, versions of the myths and that its adherents do not perceive any particular contradiction? How come they remain fully free to advance any hypothesis as to the empirical world and do not perceive any tension with their religious persuasion, which does not demand that they "have faith" in any sense that the western world has unfortunately become used to? > Zen Buddhism is a bit more tricky, as it's more of a philosophy than a religion, but it still makes untestable claims and rests on the revered words of a long-dead person (who, for a change, probably actually existed, and may not have been mentally ill). Yup. That's the point. Your idea of religion is based on the religions of the Book, and shaped by their vocabulary. Now, christianity was emphatically *not* considered a religion during the Roman empire, and was in fact dubbed superstitio nova ac malefica (a new and evil superstition). Anthropologically, however, it is difficult to deny the status and the function of a "religion" to things such as Zen or for that matter Marxism. > Granted, Buddhists don't believe in and worship a god, but they believe in and worship Buddha. I believe in and worship Natasha, does it make me less scientific-oriented than the next fellow?
:-) > I doubt that a Buddhist would try to kill you or condemn you to hell for questioning his beliefs, but they are still Beliefs rather than working hypotheses that are expected to be improved upon. Most perfectly secular philosophies contain tenets (e.g., value judgments), which have absolutely nothing to do with "working hypotheses that are expected to be improved upon". The real issue, if any, is whether a given persuasion is *compatible* with scientific epistemology. > Also (this is nothing to do with it conflicting with science, of course) Buddhism is so *pessimistic*! So am I... :-) In fact, I maintain that "optimism" (the rapture to come no-matter-what, the nice extrapolations, the expectation of automagical solutions to contemporary problems, etc.) is too tainted by religious mentality - in the restricted, contemporary sense - for my personal taste. :-) -- Stefano Vaj From hkeithhenson at gmail.com Mon Jan 17 16:39:55 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 17 Jan 2011 09:39:55 -0700 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity Message-ID: On Mon, Jan 17, 2011 at 5:00 AM, Anders Sandberg wrote: snip > This is why I think upload-triggered singularities (the minds will be > based on human motivational templates at least) or any singularity with > a relatively slow acceleration (allowing many different smart systems to > co-exist and start to form self-regulating systems AKA societies) are > vastly more preferable than hard takeoffs. I agree. However, we need to *deeply* understand evolved human motivational templates and either modify them or keep the entities with them out of certain phase spaces. As I have often discussed here, there are psychological mechanisms that switch humans into an irrational "war mode" when environmental conditions are such that war is a better path for genes than the alternative.
Further, I think we are strongly biased not to understand our motives; in fact we will actively fight such understanding. > If we have reasons to think > hard takeoffs are even somewhat likely, then we need to take > friendliness very seriously, try to avoid singularities altogether or > move towards the softer kinds. Whether we can affect things enough to > influence their probabilities is a good question. Indeed. > Even worse, we still have no good theory to tell us the likelihood of > hard takeoffs compared to soft (and compared to no singularity at all). > Hopefully we can build a few tomorrow... It's the Fermi problem again. Keith From stefano.vaj at gmail.com Mon Jan 17 17:14:32 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 17 Jan 2011 18:14:32 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: 2011/1/17 John Clark : > That's easy, when people talk about "friendly AI" they aren't really talking > about a friend they're talking about a slave; so a "friendly AI" in this > context is defined as a being who cares more about human well being than any > of its own concerns. It ain't gonna happen. The situation is made even more > grotesque when the slave in question is astronomically more intelligent than > its master. A slave? A machine, in fact. Now, one cannot at the same time demand an emulation able to show the appropriate degree of "unfriendliness" to pass a Turing test, and then complain that it is not friendly enough. On the other hand, given Wolfram's Principle of Computational Equivalence, which I have always found pretty persuasive, there are no things more intelligent than others once the very low level of complexity required for exhibiting universal computing features is reached. There are just things that execute different programs with different performances.
No amount of computing power, flexibility, complexity or "iterativity" would necessarily imply any ability to show more friendliness or unfriendliness than an abacus does. This has only to do with deliberate anthropomorphic emulations, be they emulations of a given human being or of a "patchwork", brand-new, individual. And even there friendliness and unfriendliness (as "conscience", "identity", etc.) would remain mere projections of the observer - "my car is angry with me today, it does not want to start"... > Exactly. It might take a very long time, trillions of nanoseconds in fact, > but after countless improvements and iterations it would be impossible for a > mere human to tell which AI started from an uploaded person and which AI > started from scratch. In any event, to be recognised as "persons", or as "intelligent" entities in an anthropomorphic sense, both would have to behave to some extent like one. Even if the hardware is a mere Chinese Room. Wolfram and I consider, in fact, reasonable a cosmological description under which computing already takes place on scales infinitely higher than those of a human brain; and yet we do consider such computation as mere "senseless natural phenomena", without attributing them any agency, unless sometimes metaphorically. -- Stefano Vaj From stefano.vaj at gmail.com Mon Jan 17 17:26:21 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 17 Jan 2011 18:26:21 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: Message-ID: On 17 January 2011 17:39, Keith Henson wrote: > As I have often discussed here, there are psychological mechanisms > that switch humans into an irrational "war mode" when environmental > conditions are such that war is a better path for genes than the > alternative. This would make it a "rational" war mode by definition, wouldn't it? 
In any event, even today we can make machines that can persuasively behave *as if* they were "angry" or primarily "motivated" by their own survival and reproduction, killing humans in the process. Now, it would seem to me that no plausible relation exists between the relative efficiency of such kinds of machines in those tasks and the degree of "general anthropomorphic intelligence" they may exhibit without (btw, largely available if still required) human assistance. All the mythology to the contrary IMHO does not bear closer, critical inspection and is easily deconstructed as archetypical fears which represent just the umpteenth avatar of the Golem/Frankenstein myth. -- Stefano Vaj From anders at aleph.se Mon Jan 17 22:37:44 2011 From: anders at aleph.se (Anders Sandberg) Date: Mon, 17 Jan 2011 22:37:44 +0000 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> Message-ID: <4D34C4B8.5080100@aleph.se> John Clark wrote: > On Jan 16, 2011, at 7:17 PM, Anders Sandberg wrote: > >> any singularity with a relatively slow acceleration [...] > > A slow singularity is a contradiction in terms. If you can't make a > prediction even a subjectively short amount of time into the future > that is even approximately correct then it's a singularity, if you can > then it's not. The term technological singularity has misleading properties, since it primes intuitions of something pointlike, infinite etc. It is used in several meanings, a few are listed in http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf The senses I used above were B and C, Self-improving technology and Intelligence Explosion. There is no clear reason why these *have to* produce change faster than societal timescales, or imply a strong prediction horizon.
One of the big insights from today's intelligence explosion workshop was that people often hold very strong and confident views on whether intelligence explosions will be fast (and localized) or slower (and occur across an economy), but that they do not seem to have arguments for their positions that would actually justify their level of confidence. So I think one important conclusion is that we should not be confident at all about likely speeds - our intuitions are likely heavily biased. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From anders at aleph.se Mon Jan 17 22:54:16 2011 From: anders at aleph.se (Anders Sandberg) Date: Mon, 17 Jan 2011 22:54:16 +0000 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> Message-ID: <4D34C897.7030102@aleph.se> Stefano Vaj wrote: > > I am still persuaded that the crux of the matter remains a less > superficial consideration of concepts such as "intelligence" or > "friendliness". I suspect that at any level of computing power, > "motivation" would only emerge if a deliberate effort is made to > emulate human (or at least biological) evolutionary artifacts such as > sense of identity, survival instinct, etc., which would be certainly > interesting, albeit probably much less crucial to their performances > and flexibility than one may think. "Motivation" does not have to be anything like human motivations. As Wikipedia says, "Motivation is the driving force which causes us to achieve goals." - a chess playing system can be said to have a motivation to win games built into itself, just like Schmidhuber's Gödel machine and Hutter's AIXI have a motivation to maximize their utility functions. Now, the sub-goals that emerge in the latter two AI architectures seem likely to be utterly alien to human motivations for nearly any utility function.
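Anders' point that "motivation" can be nothing more than a built-in drive to maximize a utility function is easy to make concrete. The toy below is an invented illustration (it is not AIXI or a Gödel machine, and all names in it are made up for the example): an agent with no human-like wants that nonetheless "prefers" whatever scores highest on its utility function.

```python
# Toy illustration: "motivation" as nothing more than argmax over a
# utility function. Invented names; not any real AI architecture.

def make_agent(utility):
    """Return a policy that always picks the action with highest utility."""
    def act(actions):
        return max(actions, key=utility)
    return act

# A "chess-like" agent whose utility is material balance after a move.
# States are just (my_material, opponent_material) pairs for brevity.
utility = lambda state: state[0] - state[1]
agent = make_agent(utility)

candidate_states = [(39, 39), (39, 36), (38, 39)]  # results of three moves
print(agent(candidate_states))  # the agent "prefers" capturing: (39, 36)
```

Nothing here resembles a human motive, yet the system reliably pursues its goal, which is the sense of "motivation" the quoted paragraph is using.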
Whether we would get instrumental subgoals corresponding to Omohundro AI drives remains to be seen. His (and others) arguments are qualitative and not formalized; I think it would be a really good project to try to prove, disprove them or delimit under what circumstances they occur. > > Now, are we really in the business of transhumanism to advocate for > the enforcement of a global, public control of tech progress in the > field of information technology aimed at slowing down its already > glacial pace? I think there are already more than enough people who > are only too happy to preach for the adoption of such measures... Actually thinking about the risks and problems before promoting technologies is a sane thing. If there is a big danger with it we better think about effective solutions to it. I'm rather a transluddite than promoter of every shiny new technology - cobalt bombs are shiny too. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From anders at aleph.se Mon Jan 17 23:06:31 2011 From: anders at aleph.se (Anders Sandberg) Date: Mon, 17 Jan 2011 23:06:31 +0000 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: <4D34CB77.9090704@aleph.se> Stefano Vaj wrote: > > On the other hand, given Wolfram's Principle of Computation > Equivalence, which I have always found pretty persuasive, there are > not things more intelligent than others once the very low level of > complexity required for exhibiting universal computing features is > reached. There are just things that execute different programs with > different performances. > Hmm, as a neuroscientist I think the computational complexity of a chimpanzee brain and a human brain are essentially identical. Yet I think we both agree that humans can think and do things chimps cannot possibly come up with, no matter the amount of time. 
There seem to exist pretty firm limits to human cognition such as working memory (limited number of chunks, ~3-5) or complexity of predicates that can be learned from examples (3-4 logical connectives mark the limit). These limits do not seem to be very fundamental to all computation in the world, merely due to the particular make of human brains. Yet an entity that was not as bound by these would be significantly smarter than us. It is easy to train a neural network to recognize clusters in a 50-dimensional space from a few examples, yet humans cannot do this in general. I have a hard time thinking this is just because we have different performance: we are computational systems with very different intrinsic biases and learning abilities. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From michaelanissimov at gmail.com Mon Jan 17 23:11:30 2011 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 17 Jan 2011 15:11:30 -0800 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <20110117123448.GH23560@leitl.org> References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: On Mon, Jan 17, 2011 at 4:34 AM, Eugen Leitl wrote: > On Mon, Jan 17, 2011 at 12:43:35PM +0100, Stefano Vaj wrote: > > > I am still persuaded that the crux of the matter remains a less > superficial > > consideration of concept such as "intelligence" or "friendliness". I > > To be able to build friendly you must first be able to define friendly. > Notice that it's a relative metric, both in regards to the entitity > and the state at time t. > > What is friendly today is not friendly tomorrow. What is friendly to me, > a god, is not friendly to you, a mere human. > This is the basis of Eugen's opposition to Friendly AI -- he sees it as a dictatorship that any one being should have so much responsibility. 
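The clustering claim above can be sketched concretely. This is a hypothetical illustration, not taken from Anders' work: plain k-means (standing in here for the neural network he mentions) separates two well-separated synthetic blobs in 50-dimensional space from a handful of examples, a task no human could do by eyeballing the raw numbers.

```python
import numpy as np

# Sketch of the point above: a simple statistical learner recovers
# clusters in 50-dimensional space. The data are two synthetic,
# well-separated Gaussian blobs (an assumption of this example).

rng = np.random.default_rng(0)
dim, n = 50, 20
a = rng.normal(loc=0.0, scale=1.0, size=(n, dim))
b = rng.normal(loc=8.0, scale=1.0, size=(n, dim))
x = np.vstack([a, b])

# Two-means: a few Lloyd iterations from one seed point per blob.
centers = x[[0, n]].copy()
for _ in range(10):
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([x[labels == k].mean(axis=0) for k in range(2)])

# The recovered labels split the data exactly along the true blobs.
print((labels[:n] == 0).all() and (labels[n:] == 1).all())  # True
```

The contrast with human working-memory limits is the point: the algorithm treats 50 dimensions no differently from 2.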
Our position, on the other hand, is that one being will likely end up with a lot of responsibility whether or not we want it, and to maximize the probability of a favorable outcome, we should aim for a nice agent. The nice thing about our solution is that it works under both circumstances -- whether the first superintelligence becomes unrivaled or not. Eugen's strategy, however, fails if superintelligence does indeed become unrivaled, because considering the possibility in the first place was so reprehensible to him that he could never bring himself to plan for the eventuality. -- Michael Anissimov Singularity Institute singinst.org/blog -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 17 23:43:17 2011 From: spike66 at att.net (spike) Date: Mon, 17 Jan 2011 15:43:17 -0800 Subject: [ExI] google translator Message-ID: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> I have been anticipating this technology for a long time: http://www.foxnews.com/scitech/2011/01/17/google-translate-app-android-review/ Comments please? Has anyone here tried this application? May we have a few samples of translations from English to German to French to English? Then English to French to German to English? How fast is it? If this works as well as I expect, do you suppose it will put an end to universities requiring foreign languages? That exercise seems like mostly a waste of classroom time which could be used drilling science and math, ja? spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From msd001 at gmail.com Tue Jan 18 00:40:18 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 17 Jan 2011 19:40:18 -0500 Subject: [ExI] google translator In-Reply-To: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> Message-ID: 2011/1/17 spike : > I have been anticipating this technology for a long time: > > http://www.foxnews.com/scitech/2011/01/17/google-translate-app-android-review/ > > Comments please? Has anyone here tried this application? May we have a few > samples of translations from English to German to French to English? Then > English to French to German to English? How fast is it? If this works as > well as I expect, do you suppose it will put an end to universities > requiring foreign languages? That exercise seems like mostly a waste of > classroom time which could be used drilling science and math, ja? > Drilling? no please, let's agree that's a waste of time too. Until we get teachers who make science and math as fun and interesting as you know it can be, then there is no way to make it palatable to your "proles." Until that happens, we'll continue to be behind where we could be. I had a Philosophy of Mind (& artificial intelligence) class tonight. During the typical introductions, one of my classmates expressed some resentment that "computers" were "taking jobs away from people." I was horrified that anyone could seriously propose this viewpoint. It will be an interesting semester to see if I can persuade him to see the other side of his argument. Tomorrow night I will be learning Spanish (again). My last professor made a point to explain that translation services can never be as good as humans with language. I almost laughed at his lack of awareness of machine translation (but it's not funny after all). My guess is that your several-language translation is at least as good as a first-year language class's two-language translation.
Though we're told that a word-for-word "translation" is not the thing to do; instead, interpret the meaning of expressions. I agree: with language conversion applications the language barrier falls to a minor inconvenience. From sen.otaku at googlemail.com Tue Jan 18 00:12:37 2011 From: sen.otaku at googlemail.com (Sen Yamamoto) Date: Mon, 17 Jan 2011 19:12:37 -0500 Subject: [ExI] google translator In-Reply-To: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> Message-ID: Google translator isn't really all that good to start with sadly. :( You can't see the nuances, which are often very important. From spike66 at att.net Tue Jan 18 02:17:10 2011 From: spike66 at att.net (spike) Date: Mon, 17 Jan 2011 18:17:10 -0800 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> Message-ID: <002501cbb6b5$d0a262f0$71e728d0$@att.net> >... On Behalf Of Mike Dougherty ... >...Drilling? no please, let's agree that's a waste of time too... Hmmm, you are right Mike, poor choice of terms on my part. >...Until we get teachers who make science and math as fun and interesting as you know it can be then there is no way to make it palatable to your "proles." Sure, but of course they are not *my* proles. If one wishes to interpret it as a disrespectful term meaning below the ruling elite, I do claim immunity, for I am a prole myself. There are those who tell me I have no class. With these I beg to differ. Low is a class. >... Until that happens, we'll continue to be behind where we could be... Ja, way behind. >... one of my classmates expressed some resentment that "computers" were "taking jobs away from people." I was horrified... Before being horrified, consider they *are* taking some jobs from some people. Think that one over. Of course they result in more jobs, but ask your classmate to elaborate. >...
It will be an interesting semester to see if I can persuade him to see the other side of his argument... While you are at it, try to see his. You and he might actually be in agreement, just looking at the same question from different angles and suffering from difference in use of language. >...My last professor made a point to explain that translation services can never be as good as humans with language... Does your last professor follow the Jeopardy contestant Watson? We have seen software get better and better at chess until it surpassed all humans. Very impressive. We have now seen computers climb the ranks in Jeopardy. Even more impressive. Why not the ability to translate in realtime as well as or better than humans? >... I almost laughed at his lack of awareness of machine translation (but it's not funny after all). My guess is that your several-language translation is at least as good as a first-year language class's two-language translation... Here's what I am looking for: the improvement by humans in the skill of using realtime translators. That is the real skill. spike From spike66 at att.net Tue Jan 18 02:21:42 2011 From: spike66 at att.net (spike) Date: Mon, 17 Jan 2011 18:21:42 -0800 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> Message-ID: <002601cbb6b6$7233c690$569b53b0$@att.net> >... On Behalf Of Sen Yamamoto Subject: Re: [ExI] google translator >Google translator isn't really all that good to start with sadly. :( You can't see the nuances, which are often very important... Sen, instead of evaluating Google translator, imagine that the actual skill is in using a translator to convey a nuanced meme. Think of it as analogous to playing a musical instrument. The real question is not 'how good is it?' but rather 'how good are you?'
spike From hkeithhenson at gmail.com Tue Jan 18 04:11:52 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 17 Jan 2011 21:11:52 -0700 Subject: [ExI] extropy-chat Digest, Vol 88, Issue 31 In-Reply-To: References: Message-ID: On Mon, Jan 17, 2011 at 7:31 PM, Stefano Vaj wrote: > On 17 January 2011 17:39, Keith Henson wrote: >> As I have often discussed here, there are psychological mechanisms >> that switch humans into an irrational "war mode" when environmental >> conditions are such that war is a better path for genes than the >> alternative. > > This would make it a "rational" war mode by definition, wouldn't it? From the viewpoint of genes yes. From the viewpoint of the poor schmuck who gets his ass shot off, no. And it only applies when you are fighting for close relatives. That's the real power of Dawkins' Selfish Gene: it teaches you to consider different viewpoints. > In any event, even today we can make machines who can persuasively > behave *as if* they were "angry" or primarily "motivated" by their own > survival and reproduction, killing humans in the process. From the viewpoint of humans, particularly mine, that seems like a really bad idea. snip Keith From eugen at leitl.org Tue Jan 18 07:23:29 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 18 Jan 2011 08:23:29 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: <20110118072329.GK23560@leitl.org> On Mon, Jan 17, 2011 at 03:11:30PM -0800, Michael Anissimov wrote: > This is the basis of Eugen's opposition to Friendly AI -- he sees it as a This is not the basis. This is one of the many ways I'm pointing out that what you're trying to do is undefined. Trying to implement something undefined is going to produce an undefined outcome. That's a classical case of being not even wrong. > dictatorship that any one being should have so much responsibility.
Yes, I don't think something derived from a monkey's idle fart should have the power to constrain future evolution of the universe. I think that's pretty responsible. > Our position, on the other hand, is that one being will likely end up with a Not one being, a population of beings. No singletons in this universe. Rapidly diversifying population. Same thing as before, only more so. > lot of responsibility whether or not we want it, and to maximize the > probability of a favorable outcome, we should aim for a nice agent. Favorable for *whom*? Measured in what? Nice, as relative to whom? Measured in which? > The nice thing about our solution is that it works under both circumstances > -- whether the first superintelligence becomes unrivaled or not. Eugen's > strategy, however, fails if superintelligence does indeed become unrivaled, Thanks for suggesting my strategies, but I think I can manage on my own. > because considering the possibility in the first place was so reprehensible > to him that he could never bring himself to plan for the eventuality. Isn't it obvious what the plan should be for wannabe Cosmic Despot implementers? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Tue Jan 18 12:07:53 2011 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 18 Jan 2011 23:07:53 +1100 Subject: [ExI] youtube channel of the feral human who shot up the political rally in arizona In-Reply-To: <130698.85325.qm@web46112.mail.sp1.yahoo.com> References: <130698.85325.qm@web46112.mail.sp1.yahoo.com> Message-ID: 2011/1/15 Alan Brooks > > >On Thu, Jan 13, 2011 at 1:19 PM, Stathis Papaioannou wrote: > > He's psychotic, probably schizophrenic. He should have been treated.
> > > Don't be so sure until we can read the examination reports; perhaps he is not psychotic, or maybe he is borderline; it could be he wanted to be famous-- which is being 'crazy like a fox'. > Was Tim McVeigh a psychotic, or was he also crazy like a fox? Or Son Of Sam, Mark David Chapman. Notice how they appeared psychotic at first but copped guilty pleas (Berkowitz, Chapman) or were found to be sane enough for trial. Even the wild Christian-rapist character who kidnapped Elizabeth Smart was found guilty! Frankly, I don't think psychiatry is much more of a science than economics, sociology, political 'science', etc. Societal and professional biases are too prevalent. Psychosis is not culturally defined but a physical disease affecting the brain, as much as a stroke or a brain tumour is. As with most things in medicine, there are cases where the diagnosis is obvious and other cases where there is doubt. It is also not always clear to what extent the psychosis contributed to the crime: I have known schizophrenic patients who tell me that they steal things with impunity because they are likely to be let off on account of their mental illness. Another consideration is that in most jurisdictions being found not guilty of a serious crime such as murder on the grounds of insanity is worse than being found guilty, in that it leads to longer sentences in forensic prisons and more onerous monitoring and other conditions once released. -- Stathis Papaioannou From msd001 at gmail.com Tue Jan 18 12:58:01 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 18 Jan 2011 07:58:01 -0500 Subject: [ExI] google translator In-Reply-To: <002501cbb6b5$d0a262f0$71e728d0$@att.net> References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> Message-ID: On Mon, Jan 17, 2011 at 9:17 PM, spike wrote: > Sure, but of course they are not *my* proles.
If one wishes to interpret it > as a disrespectful term meaning below the ruling elite, I do claim immunity, > for I am a prole myself. There are those who tell me I have no class. With > these I beg to differ. Low is a class. I attributed the term to you because you have used it here on several occasions. I understand your intent and I still find it amusing every time. Perhaps somewhere it connects with A Streetcar Named Desire where Stanley explains the difference between calling someone of Polish descent a Pollack or a Pole... though there's no intentionally derogatory form such as Prollack (not that you would use intentionally derogatory terms for people) I think of Stanley Kowalski [exemplary prole] making this distinction for us... >>... one of my classmates expressed some resentment that "computers" were > "taking jobs away from people." I was horrified... > > Before being horrified, consider they *are* taking some jobs from some > people. Think that one over. Of course they result in more jobs, but > ask your classmate to elaborate. > While you are at it, try to see his. You and he might actually be in > agreement, just looking at the same question from different angles and > suffering from difference in use of language. Probably. I hope he sees that those people most likely to be displaced by robots are those who most need to avail themselves of higher education. From jonkc at bellsouth.net Tue Jan 18 15:24:10 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 18 Jan 2011 10:24:10 -0500 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> Message-ID: <561E865F-FA5C-4684-B5D2-54A1043FDF17@bellsouth.net> On Jan 17, 2011, at 12:14 PM, Stefano Vaj wrote: >> Me: >> The situation is made even more grotesque when the slave in question is astronomically more intelligent than its master. > > A slave? A machine, in fact.
Apparently you think that awarding the grand title of "person" should be unrelated to intelligence or other things of merit, and it should be entirely dependent on whether the entity in question is made of meat or semiconductors. I disagree. > one cannot wish at the same time demand an emulation able to show the appropriate degree of > "unfriendliness" to pass a Turing test, and then complain that it is not friendly enough. I have no idea what that means. > > given Wolfram's Principle of Computation Equivalence, which I have always found pretty persuasive, there are not things more intelligent than others once the very low level of complexity required for exhibiting universal computing features is reached. There are just things that execute different programs with different performances. The second program I ever wrote in my life (after one for the Mandelbrot set) was for the Game of Life. It is known that the Game of Life can simulate a Universal Turing Machine, so my childish little program had the potential to perform any calculation in the universe, even a calculation performed by a mighty Jupiter Brain that would produce a Singularity. Unfortunately my silly little program did not live up to its potential. > friendliness and unfriendliness (as "conscience", "identity", etc.) would remain mere projections of the observer - "my car is angry with me today, it does not want to start" Is it also a "mere projection" on your part when a fellow human being seems to be angry with you? > > In any event, to be recognised as "persons", or as "intelligent" entities I would humbly submit that it is of little importance if you consider a super intelligent computer to be a person or not, the important matter is if the super intelligent computer considers you to be a person or not. > Even if the hardware is a mere Chinese Room. Ah the good old Chinese Room, perhaps the stupidest thought experiment ever conceived of by the mind of man. 
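[The Game of Life mentioned above really is this small; a minimal sketch of one update step, using the standard B3/S23 rules on a fixed grid with cells beyond the border treated as dead:]

```python
def life_step(grid):
    """One Game of Life step (birth on 3 neighbors, survival on 2 or 3)
    on a list-of-lists grid; cells outside the grid are treated as dead."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[rr][cc]
                   for rr in range(max(0, r - 1), min(rows, r + 2))
                   for cc in range(max(0, c - 1), min(cols, c + 2))
                   if (rr, cc) != (r, c))

    return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
             or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
             for c in range(cols)]
            for r in range(rows)]

# A blinker: the simplest oscillator, period 2.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
```

[Two steps return the blinker to its starting state. Simulating a Turing machine takes enormously larger patterns, but the rule being iterated is just this.]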
John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Jan 18 16:11:16 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 18 Jan 2011 17:11:16 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <4D34CB77.9090704@aleph.se> References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> <4D34CB77.9090704@aleph.se> Message-ID: On 18 January 2011 00:06, Anders Sandberg wrote: > There seem to exist pretty firm limits to human cognition such as working > memory (limited number of chunks, ~3-5) or complexity of predicates that can > be learned from examples (3-4 logical connectives mark the limit). These > limits do not seem to be very fundamental to all computation in the world, > merely due to the particular make of human brains. Yet an entity that was > not as bound by these would be significantly smarter than us. Simply a matter of performances. If a man with a piece of paper can operate a cellular automaton, and a cellular automaton can demonstrably perform any kind of computation at all, a man with a piece of paper can do whatever any non-quantum computer can do, given enough time (and paper). So, yes, we could have human emulations running faster. I am inclined to postulate that they are externally indistinguishable, however, from a man with a powerful enough computer at his fingertips. 
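[The man-with-paper point can be made concrete with Rule 110, an elementary cellular automaton Matthew Cook proved Turing-universal: each step is literally an 8-entry table lookup a person can apply by hand. A minimal sketch:]

```python
RULE = 110  # the rule number encodes the new cell value for each 3-bit neighborhood

def ca_step(cells, rule=RULE):
    """One step of an elementary cellular automaton; cells past the ends are 0."""
    padded = [0] + list(cells) + [0]
    # neighborhood (left, center, right) read as a 3-bit index into the rule
    return [(rule >> (padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2])) & 1
            for i in range(len(cells))]

row = [0, 0, 0, 0, 1]  # a single live cell at the right edge
```

[Given enough time and paper, iterating this lookup is all the "computer" that universality requires.]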
-- Stefano Vaj From stefano.vaj at gmail.com Tue Jan 18 15:58:19 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 18 Jan 2011 16:58:19 +0100 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <4D34C897.7030102@aleph.se> References: <4D338A8B.2030603@aleph.se> <4D34C897.7030102@aleph.se> Message-ID: On 17 January 2011 23:54, Anders Sandberg wrote: > Stefano Vaj wrote: >> I am still persuaded that the crux of the matter remains a less >> superficial consideration of concepts such as "intelligence" or >> "friendliness". I suspect that at any level of computing power, >> "motivation" would only emerge if a deliberate effort is made to emulate >> human (or at least biological) evolutionary artifacts such as sense of >> identity, survival instinct, etc., which would be certainly interesting, >> albeit probably much less crucial to their performances and flexibility than >> one may think. > > "Motivation" does not have to be anything like human motivations. As > Wikipedia says, "Motivation is the driving force which causes us to achieve > goals." - a chess playing system can be said to have a motivation to win > games built into itself, just like Schmidhuber's Gödel machine and Hutter's > AIXI have a motivation to maximize their utility functions. Absolutely. But of course we could also try and emulate with arbitrary degrees of accuracy "human-like" motivations. Even though this would be an interesting and satisfying achievement per se, it is not clear, besides performance issues, what it would have to do with "intelligence" and "risk" in a broader and more rigorous sense, but I suspect that only such an emulation would be considered an "AGI" by those who discuss the "friendliness" and "unfriendliness" thereof. > Actually thinking about the risks and problems before promoting technologies > is a sane thing. If there is a big danger with it we better think about > effective solutions to it.
I'm rather a transluddite than a promoter of every > shiny new technology - cobalt bombs are shiny too. Absolutely right again. I am only saying a) that there is no shortage of people presenting the case against (single) technology(ies) and/or for the precautionary principle; b) it is by no means obvious that computers are made any more (or less, for that matter: see under Robot-God) dangerous by "intelligence" in the AGI sense. -- Stefano Vaj From stefano.vaj at gmail.com Tue Jan 18 16:32:06 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 18 Jan 2011 17:32:06 +0100 Subject: [ExI] extropy-chat Digest, Vol 88, Issue 31 In-Reply-To: References: Message-ID: On 18 January 2011 05:11, Keith Henson wrote: >> This would make it a "rational" war mode by definition, wouldn't it? > From the viewpoint of genes yes. From the viewpoint of the poor > schmuck who gets his ass shot off, no. If "rational" is a synonym of "good", you are right. But even good is essentially a matter of perspectives, isn't it? >> In any event, even today we can make machines who can persuasively >> behave *as if* they were "angry" or primarily "motivated" by their own >> survival and reproduction, killing humans in the process. > From the viewpoint of humans, particularly mine, that seems like a > really bad idea. Their "general intelligence" or their possible ability to pass a Turing test would, however, seem immaterial to how bad such an idea is. Moreover, aren't such machines currently manufactured by humans for their own purposes? This makes me doubt that a general "human" viewpoint exists in this respect. Unless in the sense of course that cars, e.g., have been fighting since the fifties a war against human beings and are perhaps winning it, given that today a family in Milan has more cars than children on average.
-- Stefano Vaj From spike66 at att.net Tue Jan 18 19:44:39 2011 From: spike66 at att.net (spike) Date: Tue, 18 Jan 2011 11:44:39 -0800 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> Message-ID: <005501cbb748$2537bfd0$6fa73f70$@att.net> ... On Behalf Of Mike Dougherty Subject: Re: [ExI] google translator On Mon, Jan 17, 2011 at 9:17 PM, spike wrote: >> Sure, but of course they are not *my* proles... >I attributed the term to you because you have used it here on several occasions. I understand your intent and I still find it amusing every time... {8^D Mike do tell me you have read Orwell's 1984, from which I borrowed the term. If not, turn off the computer, get thee to the library forthwith me lad. >... though there's no intentionally derogatory form such as Prollack... One may use derogatory terms if done respectfully, and one is derogging oneself. One must be a member of the group being derogged. Otherwise, one should always be rogatory. I try to rog others at every opportunity. We should all strive to be rogmeisters, rogging others as we ourselves would be rogged. >>>... one of my classmates expressed some resentment that "computers" were "taking jobs away from people." I was horrified... >> Before being horrified, consider they *are* taking some jobs from some people... >Probably. I hope he sees that those people most likely to be displaced by robots are those who most need to avail themselves of higher education. Ja. When you discuss this with your classmate, keep in mind those who have children in college, a car payment and a mortgage. The bank wants their money NOW, not when a prole finishes a new degree, which may or may not get her a new job, but will definitely get her a deep pile of new debts. Consider a phenomenon which I hope you never face, where for every position you apply for you are told you are overqualified.
After a while the term overqualified starts to sound more and more like a code word for "too old." spike From msd001 at gmail.com Wed Jan 19 03:09:40 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 18 Jan 2011 22:09:40 -0500 Subject: [ExI] google translator In-Reply-To: <005501cbb748$2537bfd0$6fa73f70$@att.net> References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> Message-ID: On Tue, Jan 18, 2011 at 2:44 PM, spike wrote: > {8^D Mike do tell me you have read Orwell's 1984, from which I borrowed the > term. If not, turn off the computer, get thee to the library forthwith me > lad. Old dog, here's a new trick: http://www.george-orwell.org/1984 > One may use derogatory terms if done respectfully, and one is derogging > oneself. One must be a member of the group being derogged. Otherwise, one > should always be rogatory. I try to rog others at every opportunity. We > should all strive to be rogmeisters, rogging others as we ourselves would be > rogged. ok then, consider yourself rogged (above) > After a while the term overqualified starts to sound more and more like a > code word for "too old." hmm.. "I'm overqualified for all-nighters" - I see what you mean :) From siproj at gmail.com Wed Jan 19 03:38:59 2011 From: siproj at gmail.com (_ _) Date: Tue, 18 Jan 2011 21:38:59 -0600 Subject: [ExI] Do You Need a Chinese Bank Account? Message-ID: Do You Need a Chinese Bank Account? http://online.wsj.com/article/SB10001424052748704307404576080222812076888.html -- siproj at gmail.com Creator of alt.inventors and keeper of the Official alt.inventors FAQ despite what some alt.config sysadmin/waste of time/bandwidth actions. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike66 at att.net Wed Jan 19 04:55:50 2011 From: spike66 at att.net (spike) Date: Tue, 18 Jan 2011 20:55:50 -0800 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> Message-ID: <003101cbb795$254e4e10$6feaea30$@att.net> ... On Behalf Of Mike Dougherty ... >...Old dog, here's a new trick: http://www.george-orwell.org/1984 Actually I prefer to think of myself as a middle aged dog. {8^D >> One may use derogatory terms if done respectfully, and one is derogging oneself... >ok then, consider yourself rogged (above) I do, early and often thanks. I always like to do things my own way in my own time, and fortunately for me, people around often have kind things to say, so for that reason I consider myself a rogged individualist. >> After a while the term overqualified starts to sound more and more like a code word for "too old." >hmm.. "I'm overqualified for all-nighters" - I see what you mean :) Me too. {8-[ Of course it depends on what activity we are talking about. I am quite skilled at sleeping all night. Something we used to discuss a long time ago, we are now seeing for the first time: the elderly computer users being left behind. My mother has been a computer user for almost 40 years now, owned a PDP 11-780 for an accounting business back in the early 70s. Now she is being overwhelmed by the pace of change in operating systems. It has long been a worry to me that we don't do enough to make technology accessible for the superannuated. IBM's Jeopardy-playing Watson exercise makes me confident we will soon have computers that operate via voice command, or respond to conversation, to be an acceptable companion for the elderly. They should: we have good voice recognition, we have good inference software. 
I would think we could already do as well as a semi-senile nursing home cohabitant for a software-based companion for the elderly. Some of you coding hipsters who know from conversational software, how far out is this now? Five years? Ten? Why don't we have stereos and TVs set up to where we just tell it what we want, and it goes off and does it? Why do we need to learn its language instead of it learning ours? spike From pharos at gmail.com Wed Jan 19 12:53:42 2011 From: pharos at gmail.com (BillK) Date: Wed, 19 Jan 2011 12:53:42 +0000 Subject: [ExI] google translator In-Reply-To: <003101cbb795$254e4e10$6feaea30$@att.net> References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Wed, Jan 19, 2011 at 4:55 AM, spike wrote: > Something we used to discuss a long time ago, we are now seeing for the > first time: the elderly computer users being left behind. My mother has > been a computer user for almost 40 years now, owned a PDP 11-780 for an > accounting business back in the early 70s. Now she is being overwhelmed by > the pace of change in operating systems. It has long been a worry to me > that we don't do enough to make technology accessible for the superannuated. > > IBM's Jeopardy-playing Watson exercise makes me confident we will soon have > computers that operate via voice command, or respond to conversation, to be > an acceptable companion for the elderly. They should: we have good voice > recognition, we have good inference software. I would think we could > already do as well as a semi-senile nursing home cohabitant for a software > based companion for the elderly.
Why > do we need to learn its language instead of it learning ours? > > Several different subjects combined here. First. No, I don't think voice-operated computers will ever appear in general use. Think about it. What happens when you get a group of people all shouting at their handheld computers? It's bad enough listening to other people's mobile phone conversations. There is a place for specialised applications such as voice recognition entry systems. Second. Old computer users fall into two groups. Old expert users and old general users. Old general users only want a few applications. Their main interest is annoying their relatives. ;) So just email, Facebook chat and photos, perhaps online banking and news browsing. The new Chrome OS laptops or even tablet computers make this very easy to do. Old expert users have the knowledge to pick and choose what they want. Most are just not interested in living in a world of continual gossip. So Facebook, texting, IMs, even mobile phones, get little used. They will probably have a small laptop that multiboots several systems depending on what they feel like playing with. They will be fixing *your* computer problems! BillK From sparge at gmail.com Wed Jan 19 14:24:42 2011 From: sparge at gmail.com (Dave Sill) Date: Wed, 19 Jan 2011 09:24:42 -0500 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Wed, Jan 19, 2011 at 7:53 AM, BillK wrote: > On Wed, Jan 19, 2011 at 4:55 AM, spike wrote: > > > Something we used to discuss a long time ago, we are now seeing for the > > first time: the elderly computer users being left behind. My mother has > > been a computer user for almost 40 years now, owned a PDP 11-780 for an > > accounting business back in the early 70s. > Probably a PDP 11/70. The 11-780 was a VAX, one of the PDP's successors.
> First. > No, I don't think voice operated computers will ever appear in general use. > Think about it. What happens when you get a group of people all > shouting at their handheld computers? It's bad enough listening to > other people's mobile phone conversations. > There is a place for specialised applications such as voice > recognition entry systems. > We already have swarms of people talking into their phones in public, as you mentioned. What difference would it make if they were actually talking *with* their phones? Second. > Old computer users fall into two groups. Old expert users and old general > users. > > Old general users only want a few applications. Their main interest is > annoying their relatives. ;) So just email, Facebook chat and photos, > perhaps online banking and news browsing. > The new Chrome OS laptops or even tablet computers makes this very easy to > do. > <http://gadgetwise.blogs.nytimes.com/2011/01/18/googles-chrome-os-putting-everything-in-the-browser-window/> > > Old expert users have the knowledge to pick and choose what they want. > Most are just not interested in living in a world of continual gossip. > So Facebook, texting, IMs, even mobile phones, get little used. They > will probably have a small laptop that multiboots several systems > depending on what they feel like playing with. They will be fixing > *your* computer problems! Don't know if I qualify as "old" at 51, but I'm definitely an expert user and I don't fit into your second category. I text (it's handy, and the kids prefer it), I have a smart phone (Dell Streak, Android), I don't own a laptop, but, if I did, it wouldn't multiboot: it'd run Linux, and any other OS's I want/need would run in VMs--but I have no need for Windows or MacOS. -Dave -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hkeithhenson at gmail.com Wed Jan 19 15:34:40 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 19 Jan 2011 08:34:40 -0700 Subject: [ExI] Rational vs good Message-ID: On Wed, Jan 19, 2011 at 5:00 AM, Stefano Vaj wrote: > > On 18 January 2011 05:11, Keith Henson wrote: >>> This would make it a "rational" war mode by definition, wouldn't it? >> > From the viewpoint of genes yes. From the viewpoint of the poor >> schmuck who gets his ass shot off, no. > > If "rational" is a synonym of "good", you are right. But even good is > essentially a matter of perspectives, isn't it? Indeed. For you a hamburger is good. From the viewpoint of a cow . . . . Keith From msd001 at gmail.com Thu Jan 20 03:08:02 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 19 Jan 2011 22:08:02 -0500 Subject: [ExI] google translator In-Reply-To: <003101cbb795$254e4e10$6feaea30$@att.net> References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Tue, Jan 18, 2011 at 11:55 PM, spike wrote: > Some of you coding hipsters who know from conversational software, how far > out is this now? Five years? Ten? Why don't we have stereos and TVs set > up to where we just tell it what we want, and it goes off and does it? Why > do we need to learn its language instead of it learning ours? Because the average person is still better at learning arbitrary devices than the devices are at learning arbitrary people. There's also a huge problem of "interim technologies" which exist between the obsolete products of today and the really obvious solutions of tomorrow - it's a hybrid of today's tech and cutting-edge prices. Not very satisfying, but ensures that the next minor increment can be marketed as an exciting gotta-have-it upgrade.
From avantguardian2020 at yahoo.com Thu Jan 20 04:21:27 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 19 Jan 2011 20:21:27 -0800 (PST) Subject: [ExI] Rational vs good In-Reply-To: References: Message-ID: <980798.96957.qm@web65603.mail.ac4.yahoo.com> ----- Original Message ---- > From: Keith Henson > To: extropy-chat at lists.extropy.org > Sent: Wed, January 19, 2011 7:34:40 AM > Subject: [ExI] Rational vs good > > On Wed, Jan 19, 2011 at 5:00 AM, Stefano Vaj wrote: > > > > On 18 January 2011 05:11, Keith Henson wrote: > >>> This would make it a "rational" war mode by definition, wouldn't it? > >> > > From the viewpoint of genes yes. From the viewpoint of the poor > >> schmuck who gets his ass shot off, no. > > > > If "rational" is a synonym of "good", you are right. But even good is > > essentially a matter of perspectives, isn't it? > > Indeed. For you a hamburger is good. From the viewpoint of a cow . . . . Well that would depend on whether the cow was a utilitarian or not. From a utilitarian point of view the cow might be inclined to think hamburger was good because it allows the existence of some 1.3 billion head of cattle. About two orders of magnitude more than the wild bovine population and probably more than the ecosystem would otherwise allow. Especially considering that a lot of large carnivores like the taste of beef. Of course this might be considered an indictment against utilitarianism rather than "good". Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure." - Dwight D.
Eisenhower From spike66 at att.net Thu Jan 20 06:55:25 2011 From: spike66 at att.net (spike) Date: Wed, 19 Jan 2011 22:55:25 -0800 Subject: [ExI] battery-less hybrid Message-ID: <000601cbb86f$047e7c70$0d7b7550$@att.net> Cool, Chrysler is doing something with an idea I have been kicking around for years, hydraulic drive: http://alttransport.com/2011/01/chrysler-announces-battery-less-hybrid/ They are using nitrogen as the compression medium, but this has its drawbacks. Every time you run the nitrogen through a compression cycle, it gets hot, then cools before the decompression phase, so energy is lost. My own vision of that idea uses a heavier but actually more efficient variation: it uses big metal coil springs as an energy storage medium instead of nitrogen. They compress and decompress with lower hysteresis (less energy loss per cycle) than nitrogen. So we would have hydraulic drive as in the Chrysler experimental vehicle, a small IC motor supplying the pressure, energy stored in the coil springs using a cylinder with hydraulic fluid on one side of a piston, a coil spring on the other. We could have several cylinders. Advantages: it would use energy from the springs during acceleration, then recompress the springs during steady-state cruise or braking, with an IC motor of about 20 kW, a twin cylinder, roughly 500cc displacement, running near full throttle most of the time. Disadvantages: that arrangement is reaaally heavy, so I don't know if it is an advantage over this nitrogen notion. Actually I suspect it isn't. {8-[ Other disadvantage: it isn't a race car. It's heavy, acceleration would be leisurely, top speed a bit on the yawnful side too. There would be leakage past the dynamic seal between piston and cylinder, which would introduce some inefficiency. Let's see if Chrysler can work out the hydraulic drive. If so, I may propose this in place of their compressed nitrogen energy storage, or perhaps try to build something like it.
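spike's weight worry checks out on the back of an envelope. The sketch below is a rough model, not an analysis of the Chrysler system: the car mass, speed, and especially the spring specific energy are assumed illustrative ballpark figures.

```python
# Rough energy check for the coil-spring hybrid. All figures are
# assumed ballpark values, not measurements or Chrysler specs.

def spring_energy_j(k_n_per_m, compression_m):
    """Ideal coil-spring energy: E = 1/2 * k * x^2."""
    return 0.5 * k_n_per_m * compression_m ** 2

def braking_energy_j(mass_kg, speed_m_s):
    """Kinetic energy one stop must absorb: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

# One stop of a 1500 kg car from 50 km/h (~13.9 m/s):
required_j = braking_energy_j(1500, 13.9)
print(round(required_j / 1000))  # kJ per stop

# Steel springs store little energy per unit mass; a few hundred J/kg
# is a commonly quoted ballpark (assumed here, not a spec-sheet value):
SPRING_J_PER_KG = 200
print(round(required_j / SPRING_J_PER_KG))  # kg of spring steel
```

Even on these optimistic assumptions, banking a single 50 km/h stop takes on the order of 145 kJ and several hundred kilograms of spring steel, which is consistent with spike's "reaaally heavy" conclusion and with why the lower-hysteresis springs may still lose to nitrogen overall.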
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Jan 20 10:02:39 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 20 Jan 2011 02:02:39 -0800 Subject: [ExI] mass transit again In-Reply-To: References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> Message-ID: <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> On Jan 12, 2011, at 1:46 AM, Alfio Puglisi wrote: > 2011/1/8 spike > This is an example of what I mentioned a few days ago about being a way bigger threat to society than is global warming, this bigger threat is feral humans: > > > http://www.cnn.com/2011/CRIME/01/07/station.videotaped.incident/index.html?hpt=T2 > > > Last week it was this: > > > http://www.bradenton.com/2010/12/27/2837235/manatee-sheriff-couple-attacked.html > > > It is a reason why I think most public transit notions are a dead end. Individual cars serve as suits of armor, providing a defensive barrier. > > > One can have a lot of legitimate reasons to prefer cars to public transit, but safety is not really one of them. Car accidents cause about 6 million injuries and 40,000 deaths per year in the US alone. People killed on mass transit are a minuscule fraction of that, even correcting for the number of travelers. The main problem with most mass transit is that it is less fuel efficient by far than even private cars with only one person in them. The reason is that a mass transit system must run a substantial amount of time without enough passengers (off peak hours) to remotely fill those heavy, energy-drinking large vehicles. This has been studied and that is what the numbers say. Next solution.. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Jan 20 10:13:04 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 20 Jan 2011 02:13:04 -0800 Subject: [ExI] Reframing transhumanism as good vs.
evil In-Reply-To: <4D2DCCFE.6010609@gmail.com> References: <4D2CBA29.6030008@gmail.com> <4D2DCCFE.6010609@gmail.com> Message-ID: <5D99069D-C463-44DE-BBD6-F35B1AFC26A1@mac.com> On Jan 12, 2011, at 7:47 AM, AlgaeNymph wrote: > On 1/11/11 5:42 PM, Adrian Tymes wrote: >> >> 1) What is the boundary between "enhancement" and "medicine"? > > Medicine is considered an act of caring, unless it's professional and technological and icky corporate bad. Enhancement is considered cheating (steroids!), unless it involves Hard Work and natural supplements. Medicine is currently only defined as curing disease - departure from the norm. So aging, since it affects all, is considered the norm and it is officially non-medicine to attempt to cure it or to prescribe for it or claim your product helps stop it. The FDA will not approve a true enhancement drug put forward as such. Bizarre, but that seems to be the way of it. > >> 1a) Does, say, curing cancer necessarily fall into only one of the two? > > Medicine, but it's a rich white man's disease. What we should really be doing is preventing disease caused by Unhealthy American Diets. > Cancer affects everyone of course. >> 1b) What about prosthetics? > > Medicine, because it's Restoring the Balance. > >> 1c) What about prosthetics that exceed human baseline performance? > > Permissible only if the enhancement is accidental. > Yep. This is the official, utterly insane medical position. >> 2) What part of "making life better for everyone (who wants a better life) and >> eliminating many of the root causes of evil (resource scarcity, fear of death, >> lack of understanding)" is not a long term and more complex form of "fighting >> evil"? > > "But is it really better?" The people who control the socially-accepted definition of morality feel that all you need is love and organic gardening. Wanting more is Consumerism, which is what the Corporations make you buy into!
Instead, we should be Respecting the Earth and Doing Our Part in the Community. > YES, it obviously is by definition. >> 3) Is not part of the discomfort we cause, because we propose to do >> something about evils that most people have accepted as inevitable? > > Oh, but you'll only improve quality of life for The Rich (who may as well be the aliens from They Live) and create a caste system. Also, without death, we'll have overpopulation and Rich People living forever. > > At this point, I expect you'll tell me that there's nothing I can do about such people and that I should just ignore them. How is that a good idea when they're not ignoring us while getting more people listening to them than we are? > >> That last one may be the most significant part. People make up all sorts of >> evil motives for us, but they rarely turn out to be true. > > How do we convince the public otherwise? Only one way. Produce the benefits we seek. Build it and they will come. Off-label use of several drugs for enhancement is rampant across multiple levels of society now. This will only increase. Build it and they will come. Insist by all means possible on the freedom to do so and never allow it to be taken away. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Jan 20 10:26:33 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 20 Jan 2011 02:26:33 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <485988.6615.qm@web114416.mail.gq1.yahoo.com> References: <485988.6615.qm@web114416.mail.gq1.yahoo.com> Message-ID: <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> On Jan 12, 2011, at 12:28 PM, Ben Zaiboc wrote: > "spike" wrote: > >> I ask you then: suppose I personally knew a way to write >> something >> inspirational. 
I know an inspiring story based on >> something that actually >> happened, which I could fictionalize to protect the >> identities, and it >> involves one who came thru a very trying time by faith in >> god. It really is >> a good story. But you know and I know I am a flaming >> atheist now. I could >> use a pseudonym. Is it ethical for me to write >> it? Would I be lying in a >> sense? I have been struggling with this question for >> years, and I am asking >> for advice here. Johnny? Adrian? >> Ben? Damien? Keith? Others? > > > Of course you wouldn't be lying, not if you know it's a true story. > As for whether you *should* write it, that's another thing. There are pros and cons. One of the cons is providing fuel for the god-squad. Why would it be unethical to admit the truth that belief in god, or at least some applications thereof, can make it easier to get through at least some types of very challenging times. That is pretty well known. Doesn't mean god is real or that religion is a more good thing than not or anything like that. So how would relating such a story in any wise be wrong or a form of lying? - samantha From sjatkins at mac.com Thu Jan 20 10:28:11 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 20 Jan 2011 02:28:11 -0800 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> <20110112160036.qr0w118sesowckws@webmail.natasha.cc> Message-ID: On Jan 12, 2011, at 2:13 PM, Stefano Vaj wrote: > On 12 January 2011 22:00, wrote: > Medicine is a very necessary component of human enhancement. The person asking you this question was not clear and you could have told him/her that one of the most beneficial aspects of enhancement is its ethical and courageous use of medicine to help cure people from dreaded diseass and tragic injuries. > > An additional, easy retort is that *medicine itself* has never been perfectly orthodox from a utiitarian POV nor "sustainable" by any means. 
At any given time, more human lives and suffering would have been spared by reallocating globally the resources devoted to medical research, and to actual day-by-day medicine for that matter, to some other end, such as feeding the hungry, increasing safety, etc. Prove it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Jan 20 10:36:32 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 20 Jan 2011 02:36:32 -0800 Subject: [ExI] Michael Nielsen on Singularity In-Reply-To: <10F076988BBF4FAD82910A1175509450@PCserafino> References: <10F076988BBF4FAD82910A1175509450@PCserafino> Message-ID: <705E53EE-31B3-4E76-B126-34D3F2814F96@mac.com> On Jan 13, 2011, at 7:50 AM, scerir wrote: > What should a reasonable person believe about the Singularity? > http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ > A 'reasonable' person, that is a person able to reason, would decide for themselves what they thought about singularity or anything else. They would snort in derision at the idea that they need to read what someone else things to get and idea of what they should conclude if they bothered to examine the subject and make up their own mind. They would be even more derisive over the notion that being able to reason is in the least compatible with being told what you 'should' think. BTW, I find the approach taken quite thoroughly unreasonable and a rather specious use of probability. - samantha From stefano.vaj at gmail.com Thu Jan 20 10:35:37 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 20 Jan 2011 11:35:37 +0100 Subject: [ExI] Rational vs good In-Reply-To: <980798.96957.qm@web65603.mail.ac4.yahoo.com> References: <980798.96957.qm@web65603.mail.ac4.yahoo.com> Message-ID: On 20 January 2011 05:21, The Avantguardian wrote: > Well that would depend on whether the cow was a utilitarian or not. That's a very interesting point. 
But I would also add that it is by no means obvious that, even for beings immediately below us in the food chain, "proletarians all over the world, unite!" reflects the real conflict of interest. In the competition for access to scarce resources, say, a breeder and its cattle share a common interest against the neighbouring breeder and its cattle. So, Terminator-like scenarios seem irremediably naive. In fact, the obvious solution to terminators "going rogue" is not a call to arms to "humans" to fight them, but simply to send better terminators to chase them down. And even in the more likely scenario where killing machines are directly or indirectly operated by other humans (say, the drones in Afghanistan of today) I sincerely doubt that those affected see it as an episode of the war between the men and the machines. What is in place is a battle between men and technologies against other men, technologies, and possibly even animals (which I believe participate to some extent in the insurgents' war effort). -- Stefano Vaj From stefano.vaj at gmail.com Thu Jan 20 10:40:50 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 20 Jan 2011 11:40:50 +0100 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> <20110112160036.qr0w118sesowckws@webmail.natasha.cc> Message-ID: 2011/1/20 Samantha Atkins : > An additional, easy retort is that *medicine itself* has never been > perfectly orthodox from a utilitarian POV nor "sustainable" by any means. At > any given time, more human lives and suffering would have been spared by > reallocating globally the resources devoted to medical research, and to > actual day-by-day medicine for that matter, to some other end, such as > feeding the hungry, increasing safety, etc. > > Prove it. How much money is required to save ten children in Africa from death by hunger? How much for a heart transplant? The arithmetic seems easy enough.
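Stefano's arithmetic can be made explicit, though both cost figures below are placeholder assumptions chosen only to show the shape of the argument, not real aid or surgical prices.

```python
# Illustrative utilitarian comparison. The two cost figures are
# placeholder assumptions, not actual aid-agency or hospital numbers.

COST_PER_CHILD_YEAR = 500          # assumed: feed one child for one year
COST_HEART_TRANSPLANT = 1_000_000  # assumed: one transplant, all-in

def child_years_forgone(transplant_cost, per_child_cost):
    """Child-years of food aid one transplant budget could buy instead."""
    return transplant_cost // per_child_cost

print(child_years_forgone(COST_HEART_TRANSPLANT, COST_PER_CHILD_YEAR))  # -> 2000
```

On these assumed numbers one transplant trades off against thousands of child-years of aid; the qualitative point survives even large changes in the assumptions, which is exactly the collision with day-by-day medical ethics that Stefano describes.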
But even within medicine itself, traditional, time-honoured ethical rules, which provide for doing everything possible for the patient at hand, sharply collide with a utilitarian point of view, which might require that the scalpel be dropped mid-surgery if the physician's attention would produce more general happiness devoted instead to the five new cases in the anteroom. -- Stefano Vaj From eugen at leitl.org Thu Jan 20 11:16:39 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 20 Jan 2011 12:16:39 +0100 Subject: [ExI] mass transit again In-Reply-To: <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> Message-ID: <20110120111639.GF23560@leitl.org> On Thu, Jan 20, 2011 at 02:02:39AM -0800, Samantha Atkins wrote: > The main problem with most mass transit is that it is less fuel efficient > by far than even private cars with only one person in them. The reason Apart from higher street density (transport capacity) issues this strikes me as extremely implausible. Do you have references for this? > is that a mass transit system must run a substantial amount of time > without enough passengers (off peak hours) to remotely fill those > heavy, energy-drinking large vehicles. This has been studied and I don't see them heavy or fuel-inefficient (trains even use regenerative braking), and of course there are enough passengers in off-peak hours as well. > that is what the numbers say. Next solution.. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Thu Jan 20 11:36:20 2011 From: pharos at gmail.com (BillK) Date: Thu, 20 Jan 2011 11:36:20 +0000 Subject: [ExI] Reframing transhumanism as good vs.
evil In-Reply-To: References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> <20110112160036.qr0w118sesowckws@webmail.natasha.cc> Message-ID: On Thu, Jan 20, 2011 at 10:40 AM, Stefano Vaj wrote: > But even within medicine itself, traditional, time-honoured ethical > rules, which provide for doing everything possible for the > patient at hand, sharply collide with a utilitarian point of view, > which might require that the scalpel be dropped mid-surgery if the > physician's attention would produce more general happiness devoted > instead to the five new cases in the anteroom. > > Which is why the utilitarian POV is detested by humanity except in immediate life or death circumstances. Like triage - a doctor choosing who to treat when a bomb goes off in a crowded stadium. It is well known that the majority of medical costs are spent in the last year or two of life. So the utilitarian POV would provide euthanasia chambers for that group and all the money saved could be used to start another war somewhere. If transhumanists advocate utilitarian medical treatment they will be outcasts in society and universally hated. BillK From anders at aleph.se Thu Jan 20 11:44:30 2011 From: anders at aleph.se (Anders Sandberg) Date: Thu, 20 Jan 2011 11:44:30 +0000 Subject: [ExI] Limiting factors of intelligence explosion speeds Message-ID: <4D38201E.8040703@aleph.se> One of the things that struck me during our Winter Intelligence workshop on intelligence explosions was how confident some people were about the speed of recursive self-improvement of AIs, brain emulation collectives or economies. Some thought it was going to be fast in comparison to societal adaptation and development timescales (creating a winner-takes-all situation), some thought it would be slow enough for multiple superintelligent agents to emerge. This issue is at the root of many key questions about the singularity (one superintelligence or many?
how much does friendliness matter?) It would be interesting to hear this list's take on it: what do you think is the key limiting factor for how fast intelligence can amplify itself? Some factors that have been mentioned in past discussions:
Economic growth rate
Investment availability
Gathering of empirical information (experimentation, interacting with an environment)
Software complexity
Hardware demands vs. available hardware
Bandwidth
Lightspeed lags
Clearly many more can be suggested. But which bottlenecks are the most limiting, and how can this be ascertained? -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From anders at aleph.se Thu Jan 20 13:06:37 2011 From: anders at aleph.se (Anders Sandberg) Date: Thu, 20 Jan 2011 13:06:37 +0000 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> <20110112160036.qr0w118sesowckws@webmail.natasha.cc> Message-ID: <4D38335D.7090305@aleph.se> BillK wrote: > If transhumanists advocate utilitarian medical treatment they will be > outcasts in society and universally hated. > Depends on where you are. In Scandinavian countries utilitarian thinking underlies a lot of medicine, and Sweden's most well-known ethicist Torbjörn Tännsjö is a fierce utilitarian - and often lectures to the medical students in their medical ethics courses. What actually goes on is of course that medical ethics is a mixture of systems and approaches, and that the utilitarian aspect is tempered by a bunch of other deontological considerations. Utilitarianism is however not the end-all of consequentialism, there are plenty of more sophisticated forms. Just dropping the calculation of utilities and instead looking at more general patterns of consequences produces plenty of fairly acceptable systems (like prioritarianism, where one should give extra weight to the people in most need).
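The contrast Anders draws can be shown with a toy sketch; the square-root weighting below is an arbitrary concave choice used only to illustrate prioritarianism, not a canonical formula.

```python
# Toy contrast between utilitarian and prioritarian aggregation.
# The square-root transform is an arbitrary concave weighting chosen
# for illustration: any concave function favors the worse-off.

import math

def utilitarian_value(welfares):
    """Plain sum: a welfare unit counts the same wherever it lands."""
    return sum(welfares)

def prioritarian_value(welfares):
    """Concave transform: gains to the worse-off count for more."""
    return sum(math.sqrt(w) for w in welfares)

equal = [25, 25]    # total welfare 50, evenly spread
unequal = [49, 1]   # total welfare 50, concentrated on one person

print(utilitarian_value(equal), utilitarian_value(unequal))    # -> 50 50
print(prioritarian_value(equal), prioritarian_value(unequal))  # -> 10.0 8.0
```

The plain sum is indifferent between the two distributions, while the concave weighting prefers the equal one: extra weight to the people in most need, without abandoning consequence-counting.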
Rule consequentialism (act according to rules that have good consequences) can approximate deontological ethics to an arbitrary degree. The real problem is that setting up systems based on a few core principles that everything follows from logically is 1) very hard, 2) will produce plenty of unforeseen consequences that people dislike fiercely. This is true in ethics, in politics (see David Friedman's takedown of the idea that one can express libertarianism in a few axioms) and in setting up institutions. Besides the historical reasons, this is why medical ethics does not work by the government or someone else declaring a set of axioms and then having the doctors implement them logically. The actual practice is much more like how laws are built from prior rulings in a common law system - messy, complex, updateable and influenced by many institutional pressures. We might wish to improve on this, but that means getting our hands dirty in politics, debates and stakeholder meetings. I think the transhumanist goal ought to be to establish a bunch of our concepts as valid and relevant: morphological freedom, Freitas' volitional normative health concept, and enhancement as a valid medical pursuit. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From sondre-list at bjellas.com Thu Jan 20 13:23:37 2011 From: sondre-list at bjellas.com (Sondre Bjellås) Date: Thu, 20 Jan 2011 14:23:37 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> References: <485988.6615.qm@web114416.mail.gq1.yahoo.com> <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> Message-ID: It's clearly unethical to write something you know is untrue, or tell someone a lie, just to give them comfort.
Ignoring small white lies, which can be beneficial (if your wife is afraid of spiders and you brush one off her shoulder, you are not harming her by telling her that it was some dust on her shoulder that you just removed). Samantha is correct about faith being more widespread amongst the less fortunate. Christianity is a religion for the slaves, while some of the faiths in ancient Rome were religions for the rulers, same as Islam. Religions and faith follow the society and society is unfortunately very much shaped by the religions after they take root in it. Ignorance is bliss; believing in something bigger than yourselves to escape the harsh realities is a mechanism of our minds. I think it's evident that taken to the extreme, this mechanism will be a negative force in society and it's a mechanism that religious leaders use to manipulate the masses. So you should consider the ramifications of your story: will it be a story that makes people understand the world, reality and society better? Or is it just a story that will further fuel an addiction to whatever fantasy a person has created in their mind? - Sondre On Thu, Jan 20, 2011 at 11:26 AM, Samantha Atkins wrote: > > On Jan 12, 2011, at 12:28 PM, Ben Zaiboc wrote: > > > "spike" wrote: > > > >> I ask you then: suppose I personally knew a way to write > >> something > >> inspirational. I know an inspiring story based on > >> something that actually > >> happened, which I could fictionalize to protect the > >> identities, and it > >> involves one who came thru a very trying time by faith in > >> god. It really is > >> a good story. But you know and I know I am a flaming > >> atheist now. I could > >> use a pseudonym. Is it ethical for me to write > >> it? Would I be lying in a > >> sense? I have been struggling with this question for > >> years, and I am asking > >> for advice here. Johnny? Adrian? > >> Ben? Damien? Keith? Others?
> > > > Of course you wouldn't be lying, not if you know it's a true story. > > As for whether you *should* write it, that's another thing. There are > pros and cons. One of the cons is providing fuel for the god-squad. > > Why would it be unethical to admit the truth that belief in god, or at > least some applications thereof, can make it easier to get through at least > some types of very challenging times. That is pretty well known. Doesn't > mean god is real or that religion is a more good thing than not or anything > like that. So how would relating such a story in any wise be wrong or a > form of lying? > > - samantha > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Sondre Bjellås http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Jan 20 14:33:47 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 20 Jan 2011 15:33:47 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D38201E.8040703@aleph.se> References: <4D38201E.8040703@aleph.se> Message-ID: <20110120143347.GH23560@leitl.org> On Thu, Jan 20, 2011 at 11:44:30AM +0000, Anders Sandberg wrote: > One of the things that struck me during our Winter Intelligence workshop > on intelligence explosions was how confident some people were about the > speed of recursive self-improvement of AIs, brain emulation collectives > or economies. Some thought it was going to be fast in comparison to > societal adaptation and development timescales (creating a winner takes > all situation), some thought it would be slow enough for multiple > superintelligent agents to emerge. This issue is at the root of many key
Unless you're looking a swarm mind, with individual agents capable of semi-useful but fast local response, while decisions up the hierarchy are more informed but also more slow. > questions about the singularity (one superintelligence or many? how much Many. For any nontrivial sized distributed system capable of meaningful local response it must appear as a nonsingleton. > does friendliness matter?) Does the AI have the Buddha nature? > It would be interesting to hear this list's take on it: what do you > think is the key limiting factor for how fast intelligence can amplify > itself? What's the shortest possible gate delay? Add one or two zeroes, that's the ballpark of a single iteration. > Some factors that have been mentioned in past discussions: > Economic growth rate If there's a whole planet of hardware to 0wn, you can grow at about the speed of light, until you run out of resources, and have actually to touch the physical layer to extrude more substrate. > Investment availability Food availability. Joules, atoms. > Gathering of empirical information (experimentation, interacting with > an environment) Virtual environment is pretty fast for co-evolution runs. Evaluating easy stuff should be possible for ~ms generations. > Software complexity There's no software, at least no more software than we carry between our ears. > Hardware demands vs. available hardvare There's a whole smorebrod buffet of hardware to take before you ever have to go to the kitchen. > Bandwidth I have a couple of 10 GBit/s optics on my desktop. Light Peak should do 100 GBit/s. There's fundamentally no reason to not have TBit/s links, and hundreds or thousands of these on a local hyperlattice loop. That is quite a lot of bandwidth. > Lightspeed lags Which is why you pack the switches as densely as possible, and use the lowest possible complexity for each assembly. > Clearly many more can be suggested. But which bottlenecks are the most > limiting, and how can this be ascertained? 
The highest complexity is evaluating nontrivial behaviour. Motorics is easy, tasks which take people decades take time to check. Multiply by a few million rounds, that's going to take a while. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Jan 20 14:39:29 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 20 Jan 2011 15:39:29 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <20110120143347.GH23560@leitl.org> References: <4D38201E.8040703@aleph.se> <20110120143347.GH23560@leitl.org> Message-ID: <20110120143929.GA23793@leitl.org> On Thu, Jan 20, 2011 at 03:33:47PM +0100, Eugen Leitl wrote: > > There's a whole smorebrod buffet of hardware to take before you That'd be Sm?rg?sbord, sorry. > ever have to go to the kitchen. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From natasha at natasha.cc Thu Jan 20 15:29:57 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 20 Jan 2011 09:29:57 -0600 Subject: [ExI] Reframing transhumanism as good vs. evil In-Reply-To: <4D38335D.7090305@aleph.se> References: <364107.88768.qm@web114419.mail.gq1.yahoo.com> <20110112160036.qr0w118sesowckws@webmail.natasha.cc> <4D38335D.7090305@aleph.se> Message-ID: Anders wrote: BillK wrote: > If transhumanists advocate utilitarian medical treatment they will be > outcasts in society and universally hated. > [cut] "Utilitarianism is however not the end all of consequentialism, there are plenty of more sophisticated forms. 
Just dropping the calculation of utilities and instead looking at more general patterns of consequences produces plenty of fairly acceptable systems (like prioritarianism, where one should give extra weight to the people in most need). Rule consequentialism (act according to rules that have good consequences) can approximate deontological ethics to an arbitrary degree." Happiness is not an objective doctrine and no one has the moral authority to decide what is happiness. If so, I'd be very, very unhappy here in Texas, the US, and planet Earth. But the outcomes and consequences of an act are highly valuable from a design perspective. This relates to the elegance of an act or action, which takes us outside the philosophical and political realms and into functionalism. [cut] "I think the transhumanist goal ought to be to establish a bunch of our concepts as valid and relevant: morphological freedom, Freitas' volitional normative health concept, and enhancement as a valid medical pursuit." Yes. But I'd have to revise morphological freedom to make it both a positive right and a negative right. I know Max and you are more knowledgeable than me on this theory, but the average person needs to easily understand this concept and not be turned off by a misunderstanding of a negative right. I also think Freitas' volitional normative health idea is spot on, and enhancement needs to be part of that volitional norm. The sooner we address aspects of a transitional and transformative norm, the better. Best, Natasha From spike66 at att.net Thu Jan 20 17:44:04 2011 From: spike66 at att.net (spike) Date: Thu, 20 Jan 2011 09:44:04 -0800 Subject: [ExI] mass transit again In-Reply-To: <20110120111639.GF23560@leitl.org> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> <20110120111639.GF23560@leitl.org> Message-ID: <004201cbb8c9$a182c3f0$e4884bd0$@att.net> ... On Behalf Of Eugen Leitl ... 
On Thu, Jan 20, 2011 at 02:02:39AM -0800, Samantha Atkins wrote: >> The main problem with most mass transit is that it is less fuel efficient by far than even private cars with only one person in them... >Apart from higher street density (transport capacity) issues this strikes me as extremely implausible. Do you have references for this?...-- Eugen* Leitl Samantha is one of the locals; she and I regularly witness a phenomenon which constantly reinforces this notion: buses and trains with exactly two people on board, with one of these at the controls. We regularly see the electric trains that the voters just had to have (to save the planet) backing up traffic for 300 meters, with exactly two persons on board, one at the controls. Perhaps the voters failed to vote into effect a law *requiring* the silly proles to actually use these expensive public transit systems. We have a most wonderful fleet of fuel cell buses, belching pristine sparkly white clouds of steam as they motor about the bay, nearly devoid of actual passengers, saving the planet with each puff of non-carbon dioxide containing emissions. Commonly known as zero emissions vehicles, we engineering types prefer to call them emissions-elsewhere traffic blockers. They are so very efficient at emitting emissions in Nevada somewhere, while devouring all those carbon-containing tax dollars. A study was done for the San Jose fuel cell buses using total investment vs total results. The local news reported it as 51 dollars per mile. I don't think that was cost per passenger mile, so it is entirely possible that the figure would need to be divided by two or even three to derive dollars per passenger mile, which would put it at a much more palatable 17 dollars per passenger mile, which is cheaper than a fleet of three taxis to haul those three proles. 
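For what it's worth, the division in the paragraph above is easy to sanity-check. The $51/mile figure is the one reported from the study; the ridership values are purely hypothetical:

```python
# Sanity check of the cost-per-passenger-mile division above.
# 51 $/mile is the reported study figure; ridership values are assumed.
cost_per_vehicle_mile = 51.0  # dollars, total investment vs. total results

for riders in (1, 2, 3):  # hypothetical average passengers per bus
    per_passenger = cost_per_vehicle_mile / riders
    print(f"{riders} rider(s) -> ${per_passenger:.1f} per passenger-mile")
```

Dividing by three gives $17 per passenger-mile, not less; the point about the order of magnitude stands either way.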
In any case, the EEV bus project will forever be a poster child for our tax dollars at play, a symbol of why California is in such desperate trouble financially, and how we got here. The state was functionally deriving its own private version of carbon cap and trade, basically acting as its own country, attempting to save the planet while neglecting to save itself. Watch and wait, keep an eye on the headlines. spike From pharos at gmail.com Thu Jan 20 18:12:07 2011 From: pharos at gmail.com (BillK) Date: Thu, 20 Jan 2011 18:12:07 +0000 Subject: [ExI] mass transit again In-Reply-To: <004201cbb8c9$a182c3f0$e4884bd0$@att.net> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> <20110120111639.GF23560@leitl.org> <004201cbb8c9$a182c3f0$e4884bd0$@att.net> Message-ID: On Thu, Jan 20, 2011 at 5:44 PM, spike wrote: > Samantha is one of the locals; she and I regularly witness a phenomenon > which constantly reinforces this notion: buses and trains with exactly two > people on board, with one of these at the controls. We regularly see the > electric trains that the voters just had to have (to save the planet) > backing up traffic for 300 meters, with exactly two persons on board, one at > the controls. Perhaps the voters failed to vote into effect a law > *requiring* the silly proles to actually use these expensive public transit > systems. > > That's probably where they went wrong. It's different in other places. See: In London, passengers aren't allowed to travel on the roof of the buses, no matter how crowded it gets. (Although some youths do it for a dare). BillK From spike66 at att.net Thu Jan 20 18:05:23 2011 From: spike66 at att.net (spike) Date: Thu, 20 Jan 2011 10:05:23 -0800 Subject: [ExI] smorebrod Message-ID: <004901cbb8cc$9c500480$d4f00d80$@att.net> On Thu, Jan 20, 2011 at 03:33:47PM +0100, Eugen Leitl wrote: > > There's a whole smorebrod buffet of hardware to take before you That'd be Smörgåsbord, sorry. 
Eugen* Leitl ... Actually we liked smorebrod better. It reminds one of Playboy founder and jillionaire Hugh Hefner, ever free to choose a new playmate from a seemingly unlimited selection of young beauties, or perhaps an Amish martyr unsatisfied with his 73 virgins. These want smorebrod. spike From natasha at natasha.cc Thu Jan 20 19:12:09 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 20 Jan 2011 14:12:09 -0500 Subject: [ExI] smorebrod In-Reply-To: <004901cbb8cc$9c500480$d4f00d80$@att.net> References: <004901cbb8cc$9c500480$d4f00d80$@att.net> Message-ID: <20110120141209.lkqowue0w04sco4k@webmail.natasha.cc> Quoting spike : > > On Thu, Jan 20, 2011 at 03:33:47PM +0100, Eugen Leitl wrote: >> >> There's a whole smorebrod buffet of hardware to take before you > > That'd be Smörgåsbord, sorry. Eugen* Leitl ... > > > Actually we liked smorebrod better. It reminds one of Playboy founder and > jillionaire Hugh Hefner, ever free to choose a new playmate from a seemingly > unlimited selection of young beauties, or perhaps an Amish martyr > unsatisfied with his 73 virgins. These want smorebrod. I was at the manson for a party and Hefner, who is authentically charming, would have agreed to indulge smorebrod -- but not at the moment, as he is quite happy with one dish. Natasha > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From natasha at natasha.cc Thu Jan 20 19:15:37 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 20 Jan 2011 14:15:37 -0500 Subject: [ExI] smorebrod In-Reply-To: <20110120141209.lkqowue0w04sco4k@webmail.natasha.cc> References: <004901cbb8cc$9c500480$d4f00d80$@att.net> <20110120141209.lkqowue0w04sco4k@webmail.natasha.cc> Message-ID: <20110120141537.qqeulbd2scow8g84@webmail.natasha.cc> oops. Mansion. 
But I suppose Spike would say that the manson was okay since the mans on top of it. Quoting natasha at natasha.cc: > Quoting spike : > >> >> On Thu, Jan 20, 2011 at 03:33:47PM +0100, Eugen Leitl wrote: >>> >>> There's a whole smorebrod buffet of hardware to take before you >> >> That'd be Smörgåsbord, sorry. Eugen* Leitl ... >> >> >> Actually we liked smorebrod better. It reminds one of Playboy founder and >> jillionaire Hugh Hefner, ever free to choose a new playmate from a seemingly >> unlimited selection of young beauties, or perhaps an Amish martyr >> unsatisfied with his 73 virgins. These want smorebrod. > > I was at the manson for a party and Hefner, who is authentically > charming, would have agreed to indulge smorebrod -- but not at the > moment, as he is quite happy with one dish. > > Natasha > >> >> spike From alfio.puglisi at gmail.com Thu Jan 20 19:11:00 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 20 Jan 2011 20:11:00 +0100 Subject: [ExI] mass transit again In-Reply-To: <004201cbb8c9$a182c3f0$e4884bd0$@att.net> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> <20110120111639.GF23560@leitl.org> <004201cbb8c9$a182c3f0$e4884bd0$@att.net> Message-ID: On Thu, Jan 20, 2011 at 6:44 PM, spike wrote: > ... On Behalf Of Eugen Leitl > ... > > On Thu, Jan 20, 2011 at 02:02:39AM -0800, Samantha Atkins wrote: > > >> The main problem with most mass transit is that it is less fuel > efficient > by far than even private cars with only one person in them... 
> > >Apart from higher street density (transport capacity) issues this strikes > me as extremely implausible. Do you have references for this?...-- Eugen* > Leitl > > Samantha is one of the locals; she and I regularly witness a phenomenon > which constantly reinforces this notion: buses and trains with exactly two > people on board, with one of these at the controls. We regularly see the > electric trains that the voters just had to have (to save the planet) > backing up traffic for 300 meters, with exactly two persons on board, one > at > the controls. If you put electric trains where they are needed, they will be used. In my city a light rail electric line opened some months ago, after an enormous amount of debates and newspaper articles about its uselessness and how it would always be empty. It has now about 1 million riders / month, which is an average of 86 per journey, with a train every five minutes. I believe that this number is understated, since when I have taken it, it was always packed, no matter what time it was (here's an image: http://commons.wikimedia.org/wiki/File:Test_of_tramway_of_Florence_2.png ) We have a most wonderful fleet of fuel cell buses, belching pristine sparkly > white clouds of steam as they motor about the bay, nearly devoid of actual > passengers, saving the planet with each puff of non-carbon dioxide > containing emissions. Commonly known as zero emissions vehicles, we > engineering types prefer to call them emissions-elsewhere traffic blockers. > They are so very efficient at emitting emissions in Nevada somewhere, while > devouring all those carbon-containing tax dollars. > Fuel cell? Sounds too expensive. Just use natural gas powered buses. The tech is simple, and natural gas burns much cleaner than diesel. Common opinion is that a natural gas internal combustion engine is less powerful than an equivalent gasoline or diesel one, but the local bus company only converted the largest buses to natural gas, leaving the rest as diesels. 
And they don't seem to suffer from lack of pick-up, on the contrary... Alfio From rpwl at lightlink.com Thu Jan 20 19:27:36 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 20 Jan 2011 14:27:36 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D38201E.8040703@aleph.se> References: <4D38201E.8040703@aleph.se> Message-ID: <4D388CA8.60907@lightlink.com> Anders Sandberg wrote: > One of the things that struck me during our Winter Intelligence > workshop on intelligence explosions was how confident some people > were about the speed of recursive self-improvement of AIs, brain > emulation collectives or economies. Some thought it was going to be > fast in comparison to societal adaptation and development timescales > (creating a winner takes all situation), some thought it would be > slow enough for multiple superintelligent agents to emerge. This > issue is at the root of many key questions about the singularity (one > superintelligence or many? how much does friendliness matter?) > > It would be interesting to hear this list's take on it: what do you > think is the key limiting factor for how fast intelligence can > amplify itself? > > Some factors that have been mentioned in past discussions: Before we get to the specific factors you list, some general points. A) Although we can try our best to understand how an intelligence explosion might happen, the truth is that there are too many interactions between the factors for any kind of reliable conclusion to be reached. This is a complex-system interaction in which even the tiniest, least-anticipated factor may turn out to be the rate-limiting step (or, conversely, the spark that starts the fire). B) There are two types of answer that can be given. One is based on quite general considerations. 
The second has to be based on what I, as an AGI researcher, believe I understand about the way in which AGI will be developed. I will keep back the second one for the end, so people can kick that one to the ground as a separate matter. C) There is one absolute prerequisite for an intelligence explosion, and that is that an AGI becomes smart enough to understand its own design. If it can't do that, there is no explosion, just growth as usual. I do not believe it makes sense to talk about what happens *before* that point as part of the "intelligence explosion". D) When such a self-understanding system is built, it is unlikely that it will be the creation of a lone inventor who does it in their shed at the bottom of the garden, without telling anyone. Very few of the "lone inventor" scenarios (the Bruce Wayne scenarios) are plausible. E) Most importantly, the invention of a human-level, self-understanding AGI would not lead to a *subsequent* period (we can call it the "explosion period") in which the invention just sits on a shelf with nobody bothering to pick it up. A situation in which it is just one quiet invention alongside thousands of others, unrecognized and not generally believed. F) When the first human-level AGI is developed, it will either require a supercomputer-level of hardware resources, or it will be achievable with much less. This is significant, because world-class supercomputer hardware is not something that can quickly be duplicated on a large scale. We could make perhaps hundreds of such machines, with a massive effort, but probably not a million of them in a couple of years. G) There are two types of intelligence speedup: one due to faster operation of an intelligent system (clock speed) and one due to improvement in the type of mechanisms that implement the thought processes. Obviously both could occur at once, but the latter is far more difficult to achieve, and may be subject to fundamental limits that we do not understand. 
Speeding up the hardware, on the other hand, has been going on for a long time and is more mundane and reliable. Notice that both routes lead to greater "intelligence", because even a human level of thinking and creativity would be more effective if it were happening (say) a thousand times faster than it does now. ********************************************* Now the specific factors you list. 1) Economic growth rate One consequence of the above reasoning is that economic growth rate would be irrelevant. If an AGI were that smart, it would already be obvious to many that this was a critically important technology, and no effort would be spared to improve the AGI "before the other side does". Entire national economies would be sublimated to the goal of developing the first superintelligent machine. In fact, economic growth rate would be *defined* by the intelligence explosion projects taking place around the world. 2) Investment availability The above reasoning also applies to this case. Investment would be irrelevant because the players would either be governments or frenzied bubble-investors, and they would be pumping it in as fast as money could be printed. 3) Gathering of empirical information (experimentation, interacting with an environment). So, this is about the fact that the AGI would need to do some experimentation and interaction with the environment. For example, if it wanted to reimplement itself on faster hardware (the quickest route to an intelligence increase) it would probably have to set up its own hardware research laboratory and gather new scientific data by doing experiments, some of which would go at their own speed. The question is: how much of the research can be sped up by throwing large amounts of intelligence at it? This is the parallel-vs-serial problem (i.e. you can't make a baby nine times quicker by asking nine women to be pregnant for one month). 
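The parallel-vs-serial point above is Amdahl's law in miniature: however much intelligence is thrown at a research program, the irreducibly serial part caps the speedup. A sketch, with a purely hypothetical serial fraction:

```python
# Amdahl's-law view of the parallel-vs-serial problem just described:
# massive parallelism cannot speed up the serial (pregnancy-like) part.
def speedup(serial_fraction, workers):
    """Overall speedup when only the parallel part divides across workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

for workers in (9, 1000, 10**6):
    # assume (hypothetically) 10% of the work is irreducibly serial
    print(f"{workers} workers: {speedup(0.10, workers):.1f}x speedup")
```

With a 10% serial fraction the speedup saturates near 10x no matter how many workers are added, which is the force of the nine-pregnant-women example; the open question in the text is how much of real research is genuinely serial.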
This is not a factor that I believe we can understand very well ahead of time, because some experiments that look as though they require fundamentally slow physical processes -- like waiting for a silicon crystal to grow, so we can study a chip fabrication mechanism -- may actually be dependent on smartness, in ways that we cannot anticipate. It could be that instead of waiting for the chips to grow at their own speed, the AGI can do clever micro-experiments that give the same information faster. This factor invites unbridled speculation and opinion, to such an extent that there are more opinions than facts. However, we can make one observation that cuts through the arguments. Of all the factors that determine how fast empirical scientific research can be carried out, we know that intelligence and thinking speed of the scientists themselves *must* be one of the most important, today. It seems likely that in our present state of technological sophistication, advanced research projects are limited by the availability and cost of intelligent and experienced scientists. But if research labs around the world have stopped throwing *more* scientists at problems they want to solve, because the latter cannot be had, or are too expensive, would it be likely that the same research labs are *also*, quite independently, at the limit for the physical rate at which experiments can be carried out? It seems very unlikely that both of these limits have been reached at the same time, because they cannot be independently maximized. (This is consistent with anecdotal reports: companies complain that research staff cost a lot, and that scientists are in short supply: they don't complain that nature is just too slow). In that case, we should expect that any experiment-speed limits lie up the road, out of sight. We have not reached them yet. So, for that reason, we cannot speculate about exactly where those limits are. 
(And, to reiterate: we are talking about the limits that hit us when we can no longer do an end-run around slow experiments by using our wits to invent different, quicker experiments that give the same information). Overall, I think that we do not have concrete reasons to believe that this will be a fundamental limit that stops the intelligence explosion from taking an AGI from H to (say) 1,000 H. Increases in speed within that range (for computer hardware, for example) are already expected, even without large numbers of AGI systems helping out, so it would seem to me that physical limits, by themselves, would not stop an explosion that went from I = H to I = 1,000 H. 4) Software complexity By this I assume you mean the complexity of the software that an AGI must develop in order to explode its intelligence. The premise is that even an AGI with self-knowledge finds it hard to cope with the fabulous complexity of the problem of improving its own software. This seems implausible as a limiting factor, because the AGI could always leave the software alone and develop faster hardware. So long as the AGI can find a substrate that gives it (say) 1,000 H thinking-speed, we have the possibility for a significant intelligence explosion. Arguing that software complexity will stop the initial human level AGI from being built is a different matter. It may stop an intelligence explosion from happening by stopping the precursor events, but I take that to be a different type of question. 5) Hardware demands vs. available hardware I have already mentioned, above, that a lot depends on whether the first AGI requires a large (world-class) supercomputer, or whether it can be done on something much smaller. This may limit the initial speed of the explosion, because one of the critical factors would be the sheer number of copies of the AGI that can be created. Why is this a critical factor? 
Because the ability to copy the intelligence of a fully developed, experienced AGI is one of the big new factors that makes the intelligence explosion what it is: you cannot do this for humans, so human geniuses have to be rebuilt from scratch every generation. So, the initial requirement that an AGI be a supercomputer would make it hard to replicate the AGI on a huge scale, because the replication rate would (mostly) determine the intelligence-production rate. However, as time went on, the rate of replication would grow, as hardware costs went down at their usual rate. This would mean that the *rate* of arrival of high-grade intelligence would increase in the years following the start of this process. That intelligence would then be used to improve the design of the AGIs (at the very least, increasing the rate of new-and-faster-hardware production), which would have a positive feedback effect on the intelligence production rate. So I would see a large-hardware requirement for the first AGI as something that would dampen the initial stages of the explosion. But the positive feedback after that would eventually lead to an explosion anyway. If, on the other hand, the initial hardware requirements are modest (as they very well could be), the explosion would come out of the gate at full speed. 6) Bandwidth Alongside the aforementioned replication of adult AGIs, which would allow the multiplication of knowledge in ways not currently available in humans, there is also the fact that AGIs could communicate with one another using high-bandwidth channels. This is inter-AGI bandwidth. As a separate issue, there might be bandwidth limits inside an AGI, which might make it difficult to augment the intelligence of a single system. This is intra-AGI bandwidth. 
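Rough arithmetic gives a sense of scale for the inter-AGI case. The state size below is a pure assumption for illustration; the link rates echo the figures mentioned earlier in the thread:

```python
# Time to copy a hypothetical AGI state image between nodes at various
# link rates. The 1-petabyte state size is an assumption for scale only.
state_bits = 1e15 * 8  # 1 PB expressed in bits

for label, rate in (("10 Gbit/s", 10e9), ("100 Gbit/s", 100e9), ("1 Tbit/s", 1e12)):
    hours = state_bits / rate / 3600
    print(f"{label}: {hours:,.1f} hours to transfer")
```

At terabit rates a petabyte-scale state moves in a couple of hours, which is why inter-AGI bandwidth looks like the lesser problem.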
The first one - inter-AGI bandwidth - is probably less of an issue for the intelligence explosion, because there are so many research issues that can be split into separably-addressable components that I doubt we would find AGIs sitting around with no work to do on the intelligence amplification project, on account of waiting for other AGIs to get a free channel to talk to them. Intra-AGI bandwidth is another matter entirely. There could be limitations on the IQ of an AGI -- for example if working memory limitations (the magic number seven, plus or minus two) turned out to be caused by connectivity/bandwidth limits within the system. However, notice that such factors may not inhibit the initial phase of an explosion, because the clock speed, not IQ, of the AGI may be improvable by several orders of magnitude before bandwidth limits kick in. The reasoning behind this is the observation that neural signal speed is so slow. If a brain-like system (not necessarily a whole brain emulation, but just something that replicated the high-level functionality) could be built using components that kept the same type of processing demands and the same signal speed, there would then be plenty of room in that kind of system to develop faster signal speeds and increase its intelligence. Overall, this is, I believe, the factor that is most likely to cause trouble. However, much research is needed before much can be said with certainty. Most importantly, this depends on *exactly* what type of AGI is being built. Making naive assumptions about the design can lead to false conclusions. 
By itself, again, this seems unlikely to be a problem in the initial few orders of magnitude of the explosion. Again, the argument derives from what we know about the brain. We know that the brain's hardware was chosen due to biochemical constraints. We are carbon-based, not silicon-and-copper-based, so, no chips in the head, only pipes filled with fluid and slow molecular gates in the walls of the pipes. But if nature used the pipes-and-ion-channels approach, there seems to be plenty of scope for speedup with a transition to silicon and copper (and never mind all the other more exotic computing substrates on the horizon). If that transition produced a 1,000x speedup, this would be an explosion worthy of the name. The only reason this might not happen would be if, for some reason, the brain is limited on two fronts simultaneously: both by the carbon implementation and by the fact that bigger brains cause disruptive light-speed delays. Or, that all non-carbon implementations of the brain take us up close to the lightspeed limit before we get much of a speedup over the brain. Neither of these ideas seems plausible. In fact, they both seem to me to require a coincidence of limiting factors (two limiting factors just happening to kick in at exactly the same level), which I find deeply implausible. ***************** Finally, some comments about approaches to AGI that would affect the answer to this question about the limiting factors for an intelligence explosion. I have argued consistently, over the last several years, that AI research has boxed itself into a corner due to a philosophical commitment to the power of formal systems. Since I first started arguing this case, Nassim Nicholas Taleb (The Black Swan) coined the term "Ludic Fallacy" to describe a general form of exactly the issue I have been describing. 
I have framed this in the context of something that I called the "complex systems problem", the details of which are not important here, although the conclusion is highly relevant. If the complex systems problem is real, then there is a very large class of AGI system designs that are (a) almost completely ignored at the moment, (b) very likely to contain true intelligent systems, and (c) quite possibly implementable on relatively modest hardware. This class of systems is being ignored for sociology-of-science reasons (the current generation of AI researchers would have to abandon their deepest loves to be able to embrace such systems, and since they are fallible humans, rather than objectively perfect scientists, this is anathema). So, my most general answer to this question about the rate of the intelligence explosion is that, in fact, it depends crucially on the kind of AGI systems being considered. If the scope is restricted to the current approaches, we might never actually reach human level intelligence, and the question is moot. But if this other class of (complex) AGI systems did start being built, we might find that the hardware requirements were relatively modest (much less than supercomputer size), and the software complexity would also not be that great. As far as I can see, most of the above-mentioned limitations would not be significant within the first few orders of magnitude of increase. And, the beginning of the slope could be in the relatively near future, rather than decades away. But that, as usual, is just the opinion of an AGI researcher. No need to take *that* into account in assessing the factors. 
;-) Richard Loosemore Mathematical and Physical Sciences, Wells College Aurora, NY 13026 USA From mbb386 at main.nc.us Thu Jan 20 20:53:36 2011 From: mbb386 at main.nc.us (MB) Date: Thu, 20 Jan 2011 15:53:36 -0500 Subject: [ExI] mass transit again In-Reply-To: References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> <20110120111639.GF23560@leitl.org> <004201cbb8c9$a182c3f0$e4884bd0$@att.net> Message-ID: <0728a64aba406775f6ca2542bf841af7.squirrel@www.main.nc.us> > > If you put electric trains where they are needed, they will be used. In my > city a light rail electric line opened some months ago, after an enormous > amount of debates and newspaper articles about its uselessness and how it > would always be empty. It has now about 1 million riders / month, which is > an average of 86 per journey, with a train every five minutes. I believe > that this number is understated, since when I have taken it, it was always > packed, no matter what time it was (here's an image: > http://commons.wikimedia.org/wiki/File:Test_of_tramway_of_Florence_2.png ) > That is very nice! It runs smoothly and does not shake the cathedral and other lovely old buildings? Running every 5 minutes sounds useful. Question: if one is coming in from the countryside, where can one safely park an automobile in order to use the train *in* town? Years ago I took the train to work, parking near the station in the small town and walking from the station in the city to the office. The problem in my present town is the little bus to the city runs one time in the morning and one time in the evening. If one's work schedule doesn't happen to match the bus schedule then bad luck for you. Every now and then the schedule has some readjustment so some riders cannot any longer use the bus while a few new ones can. Not very helpful. 
Regards, MB From pharos at gmail.com Thu Jan 20 20:25:14 2011 From: pharos at gmail.com (BillK) Date: Thu, 20 Jan 2011 20:25:14 +0000 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D388CA8.60907@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> Message-ID: On Thu, Jan 20, 2011 at 7:27 PM, Richard Loosemore wrote: > > A) Although we can try our best to understand how an intelligence > explosion might happen, the truth is that there are too many interactions > between the factors for any kind of reliable conclusion to be reached. This > is a complex-system interaction in which even the tiniest, least-anticipated > factor may turn out to be the rate-limiting step (or, conversely, the spark > that starts the fire). > > C) There is one absolute prerequisite for an intelligence explosion, > and that is that an AGI becomes smart enough to understand its own > design. If it can't do that, there is no explosion, just growth as usual. I do not believe it makes sense to talk about what happens *before* that > point as part of the "intelligence explosion". > These two remarks strike me as being very significant. *How* does it understand its own design? Suppose the AGI has a new design for subroutine 453741. How does it know whether it is *better*? First step is: How does the AGI make it bug-free? Then - Is it better in some circumstances and not others? Does it lessen the benefits of some other algorithms? Does it completely block some avenues of further design? Should this improvement be implemented before or after other changes? And then the AGI is trying to do this for hundreds of thousands of routines. To say that the AGI knows best and will solve all these problems is circular logic. First make an AGI, but you need an AGI to solve these problems........ 
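For what it's worth, BillK's checklist is the same gate ordinary software engineering already applies to a proposed rewrite. A minimal sketch (all names hypothetical, and note that it only checks the candidate *against the old behaviour* -- which is exactly the circularity being pointed at):

```python
import timeit

def baseline(xs):
    """The routine as currently implemented (hypothetical subroutine)."""
    return sorted(xs)

def candidate(xs):
    """The proposed 'improvement' (stand-in; imagine a cleverer algorithm)."""
    return sorted(xs)

def accept(new, old, cases):
    """Accept the rewrite only if it agrees with the old routine on every
    test case and is not meaningfully slower across the suite."""
    if any(new(list(c)) != old(list(c)) for c in cases):
        return False  # behavioural regression: reject immediately
    t_old = timeit.timeit(lambda: [old(list(c)) for c in cases], number=100)
    t_new = timeit.timeit(lambda: [new(list(c)) for c in cases], number=100)
    return t_new <= t_old * 1.5  # generous margin for timing noise

cases = [[3, 1, 2], [], [7, 7, 7], list(range(50, 0, -1))]
buggy = lambda xs: xs  # "optimised" by doing nothing at all
print(accept(candidate, baseline, cases))
print(accept(buggy, baseline, cases))  # rejected: changes behaviour
```

Agreement with the old routine is the only specification available here, which is BillK's point: the harness can catch regressions, but deciding whether the *old* behaviour was right in the first place already takes the intelligence you were trying to build.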
BillK From eugen at leitl.org Thu Jan 20 21:31:50 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 20 Jan 2011 22:31:50 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D388CA8.60907@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> Message-ID: <20110120213150.GL23560@leitl.org> On Thu, Jan 20, 2011 at 02:27:36PM -0500, Richard Loosemore wrote: > C) There is one absolute prerequisite for an intelligence explosion, > and that is that an AGI becomes smart enough to understand its own > design. If it can't do that, there is no explosion, just growth as Unnecessary for darwinian systems. The process is dumb as dirt, but it's working quite well. > usual. I do not believe it makes sense to talk about what happens If you define the fitness function, and have ~ms generation turnaround it's not quite as usual anymore. > *before* that point as part of the "intelligence explosion". > > D) When such a self-understanding system is built, it is unlikely that I don't think that a self-understanding system is at all possible. Or, rather, that it would perform better than a blind optimization. > it will be the creation of a lone inventor who does it in their shed at > the bottom of the garden, without telling anyone. Very few of the "lone > inventor" scenarios (the Bruce Wayne scenarios) are plausible. I agree it's probably a large scale effort, initially. > E) Most importantly, the invention of a human-level, self-understanding I wonder where the self-understanding meme is coming from. It's certainly pervasive enough. > AGI would not lead to a *subsequent* period (we can call it the > "explosion period") in which the invention just sits on a shelf with > nobody bothering to pick it up. A situation in which it is just one > quiet invention alongside thousands of others, unrecognized and not > generally believed.
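[Eugen's "define the fitness function, ~ms generation turnaround" loop is easy to sketch. OneMax (maximize the count of 1-bits) stands in for whatever fitness an AGI project would actually define; all parameters are illustrative:]

```python
import random

# Minimal darwinian loop of the kind described: an explicit fitness
# function plus fast, elitist generations. OneMax is a stand-in fitness.

rng = random.Random(42)
BITS, POP, GENS, MUT = 32, 40, 60, 0.02

def fitness(genome):
    return sum(genome)                    # OneMax: more 1-bits is fitter

def mutate(genome):
    return [bit ^ (rng.random() < MUT) for bit in genome]

pop = [[rng.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
best_initial = max(map(fitness, pop))
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]              # truncation selection, elitist
    pop = parents + [mutate(rng.choice(parents)) for _ in parents]
best_final = max(map(fitness, pop))
# keeping the parents each generation guarantees best_final >= best_initial
```

The process never "understands" the genome; it only keeps what the fitness function rewards, which is Eugen's point.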
> > F) When the first human-level AGI is developed, it will either require > a supercomputer-level of hardware resources, or it will be achievable Bootstrap takes many orders of magnitude more resources than required for operation. Even before optimization happens. > with much less. This is significant, because world-class supercomputer > hardware is not something that can quickly be duplicated on a large > scale. We could make perhaps hundreds of such machines, with a massive About 30 years from now, TBit/s photonic networking is the norm. The separation between core and edge is gone, and inshallah, so will policy enforcement. Every city block is a supercomputer, then. > effort, but probably not a million of them in a couple of years. There are a lot of very large datacenters with excellent network cross-section, even if you disregard large screen TVs and game consoles on >GBit/s residential networks. > G) There are two types of intelligence speedup: one due to faster > operation of an intelligent system (clock speed) and one due to Clocks don't scale, eventually you'll settle for local asynchronous, with large-scale loosely coupled oscillators synchronizing. > improvement in the type of mechanisms that implement the thought > processes. Obviously both could occur at once, but the latter is far How much random biochemistry tweaking would improve dramatically on the current CNS performance? As a good guess, none. So once you've reimplemented the near-optimal substrate, dramatic improvements are over. This isn't software, this is direct implementation of neural computational substrate in as thin a hardware layer as this universe allows. > more difficult to achieve, and may be subject to fundamental limits that > we do not understand. Speeding up the hardware, on the other hand, has I disagree, the limits are that of computational physics, and these are fundamentally simple. > been going on for a long time and is more mundane and reliable.
Notice > that both routes lead to greater "intelligence", because even a human > level of thinking and creativity would be more effective if it were > happening (say) a thousand times faster than it does now. Run a dog for a gigayear, still no general relativity. > > ********************************************* > > Now the specific factors you list. > > 1) Economic growth rate > > One consequence of the above reasoning is that economic growth rate > would be irrelevant. If an AGI were that smart, it would already be Any technology allowing you to keep a mind in a box will allow you to make a pretty good general assembler. The limits of such technology are energy and matter fluxes. Buying and shipping widgets is only a constraining factor in the physical layer bootstrap (if at all necessary, 30 years hence all-purpose fabrication has a pretty small footprint). > obvious to many that this was a critically important technology, and no > effort would be spared to improve the AGI "before the other side does". > Entire national economies would be sublimated to the goal of developing > the first superintelligent machine. This would be fun to watch. > In fact, economic growth rate would be *defined* by the intelligence > explosion projects taking place around the world. > > > 2) Investment availability > > The above reasoning also applies to this case. Investment would be > irrelevant because the players would either be governments or frenzied > bubble-investors, and they would be pumping it in as fast as money could > be printed. > > > 3) Gathering of empirical information (experimentation, interacting with > an environment). > > So, this is about the fact that the AGI would need to do some > experimentation and interaction with the environment. For example, if If you have enough crunch to run a mind, you have enough crunch to run really really really good really fast models of the universe. 
> it wanted to reimplement itself on faster hardware (the quickest route > to an intelligence increase) it would probably have to set up its own > hardware research laboratory and gather new scientific data by doing > experiments, some of which would go at their own speed. You're thinking like a human. > The question is: how much of the research can be sped up by throwing > large amounts of intelligence at it? This is the parallel-vs-serial > problem (i.e. you can't make a baby nine times quicker by asking nine > women to be pregnant for one month). It's a good question. I have a hunch (no proof, nothing) that the current way of doing reality modelling is extremely inefficient. Currently, experimenters have every reason to sneer at modelers. Currently. > This is not a factor that I believe we can understand very well ahead of > time, because some experiments that look as though they require > fundamentally slow physical processes -- like waiting for a silicon > crystal to grow, so we can study a chip fabrication mechanism -- may > actually be dependent on smartness, in ways that we cannot anticipate. > It could be that instead of waiting for the chips to grow at their own > speed, the AGI can do clever micro-experiments that give the same > information faster. Any intelligence worth its salt would see that it would use computational chemistry to bootstrap molecular manufacturing. The grapes could be hanging pretty low there. > This factor invites unbridled speculation and opinion, to such an extent > that there are more opinions than facts. However, we can make one > observation that cuts through the arguments. Of all the factors that > determine how fast empirical scientific research can be carried out, we > know that intelligence and thinking speed of the scientist themselves > *must* be one of the most important, today. 
It seems likely that in our > present state of technological sophistication, advanced research > projects are limited by the availability and cost of intelligent and > experienced scientists. You can also vastly speed up the rate of prototyping by scaling down and proper tooling. You see first hints of that in lab automation, particularly microfluidics. Add ability to fork off dedicated investigators at the drop of a hat, and things start happening, and in a positive-feedback loop. > But if research labs around the world have stopped throwing *more* > scientists at problems they want to solve, because the latter cannot be > had, or are too expensive, would it be likely that the same research > labs are *also*, quite independently, at the limit for the physical rate > at which experiments can be carried out? It seems very unlikely that > both of these limits have been reached at the same time, because they > cannot be independently maximized. (This is consistent with anecdotal > reports: companies complain that research staff cost a lot, and that > scientists are in short supply: they don't complain that nature is just > too slow). Most monkeys rarely complain that they're monkeys. (Resident monkeys excluded, of course). > In that case, we should expect that any experiment-speed limits lie up > the road, out of sight. We have not reached them yet. I, a mere monkey, can easily imagine two orders of magnitude speed improvements. Which, of course, result in a positive autofeedback loop. > So, for that reason, we cannot speculate about exactly where those > limits are. (And, to reiterate: we are talking about the limits that > hit us when we can no longer do an end-run around slow experiments by I do not think you will need slow experiments. Not slow by our standards, at least. > using our wits to invent different, quicker experiments that give the > same information).
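[The parallel-vs-serial problem quoted above (nine women cannot make a baby in one month) is Amdahl's law in disguise: if a fraction s of a project is irreducibly serial, no number of extra workers pushes the overall speedup past 1/s. A toy calculation; the 10% serial fraction is an illustrative assumption, not a measured number:]

```python
# Amdahl's law: overall speedup with a serial fraction s and n workers.
def amdahl_speedup(serial_fraction, workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

s = 0.10
for n in (1, 10, 100, 1000):
    print(n, round(amdahl_speedup(s, n), 2))
# the speedup climbs toward, but never reaches, 1/s = 10x
```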
> > Overall, I think that we do not have concrete reasons to believe that > this will be a fundamental limit that stops the intelligence explosion > from taking an AGI from H to (say) 1,000 H. Increases in speed within > that range (for computer hardware, for example) are already expected, > even without large numbers of AGI systems helping out, so it would seem > to me that physical limits, by themselves, would not stop an explosion > that went from I = H to I = 1,000 H. Speed limits (assuming classical computation) do not begin to take hold before 10^6, and maybe even 10^9 (this is more difficult, and I do not have a good model of wetware at 10^9 speedup to current wallclock). > > 4) Software complexity > > By this I assume you mean the complexity of the software that an AGI > must develop in order to explode its intelligence. The premise is > that even an AGI with self-knowledge finds it hard to cope with the > fabulous complexity of the problem of improving its own software. Software, that's pretty steampunk of you. > This seems implausible as a limiting factor, because the AGI could > always leave the software alone and develop faster hardware. So long as There is no difference between hardware and software (state) as far as advanced cognition is concerned. Once you've covered the easy gains in first giant co-evolution steps further increases are much more modest, and much more expensive. > the AGI can find a substrate that gives it (say) 1,000 H thinking-speed, We should be able to do 10^3 with current technology. > we have the possibility for a significant intelligence explosion. Yeah, verily. > Arguing that software complexity will stop the initial human level AGI If it hurts, stop doing it. > from being built is a different matter. It may stop an intelligence > explosion from happening by stopping the precursor events, but I take > that to be a different type of question. > > > 5) Hardware demands vs. 
available hardware > > I have already mentioned, above, that a lot depends on whether the first > AGI requires a large (world-class) supercomputer, or whether it can be > done on something much smaller. Current supercomputers are basically consumer devices or embedded devices on steroids, networked on a large scale. > This may limit the initial speed of the explosion, because one of the > critical factors would be the sheer number of copies of the AGI that can Unless the next 30 years fail to see the same development as the last ones, substrate is the least of your worries. > be created. Why is this a critical factor? Because the ability to copy > the intelligence of a fully developed, experienced AGI is one of the big > new factors that makes the intelligence explosion what it is: you > cannot do this for humans, so human geniuses have to be rebuilt from > scratch every generation. > > So, the initial requirement that an AGI be a supercomputer would make it > hard to replicate the AGI on a huge scale, because the replication rate > would (mostly) determine the intelligence-production rate. Nope. > However, as time went on, the rate of replication would grow, as Look, even now we know what we would need, but you can't buy it. But you can design it, and two weeks from now you'll get your first prototypes. That's today; 30 years hence, the prototypes might be hours away. And do you need prototypes to produce a minor variation on a stock design? Probably not. > hardware costs went down at their usual rate. This would mean that the > *rate* of arrival of high-grade intelligence would increase in the years > following the start of this process. That intelligence would then be > used to improve the design of the AGIs (at the very least, increasing > the rate of new-and-faster-hardware production), which would have a > positive feedback effect on the intelligence production rate.
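[The positive-feedback loop in the quoted paragraph -- more AGI copies means faster hardware-cost decline, which means more copies -- can be caricatured in a few lines. Every constant below is an illustrative assumption:]

```python
# Toy model of the quoted feedback loop: a fixed yearly budget buys AGI
# copies, hardware cost falls each year, and the installed base of AGIs
# itself accelerates the cost decline. All constants are illustrative.
budget = 1.0        # spend per year, arbitrary units
cost = 1.0          # cost per AGI copy
copies = 0.0
history = []
for year in range(10):
    copies += budget / cost
    # more AGIs at work -> steeper cost decline the following year
    cost *= 0.7 / (1.0 + 0.01 * copies)
    history.append(copies)
# the installed base grows faster every year: a dampened start,
# then the feedback takes over
```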
> > So I would see a large-hardware requirement for the first AGI as > something that would dampen the initial stages of the explosion. But Au contraire, this planet is made from swiss cheese. Annex at your leisure. > the positive feedback after that would eventually lead to an explosion > anyway. > > If, on the other hand, the initial hardware requirements are modest (as > they very well could be), the explosion would come out of the gate at > full speed. > > > > > 6) Bandwidth > > Alongside the aforementioned replication of adult AGIs, which would > allow the multiplication of knowledge in ways not currently available in > humans, there is also the fact that AGIs could communicate with one > another using high-bandwidth channels. This is inter-AGI bandwidth. Fiber is cheap. Current fiber comes in 40 or 100 GBit/s parcels. 30 years hence bandwidth will probably be adequate. > > As a separate issue, there might be bandwidth limits inside an AGI, > which might make it difficult to augment the intelligence of a single > system. This is intra-AGI bandwidth. Even now bandwidth growth is far in excess of computation growth. Once you go embedded memory, you're more closely matched. But still the volume/surface (you only have to communicate surface state) ratio indicates that local communication is the bottleneck. > The first one - inter-AGI bandwidth - is probably less of an issue for > the intelligence explosion, because there are so many research issues > that can be split into separably-addressable components, that I doubt we > would find AGIs sitting around with no work to do on the intelligence > amplification project, on account of waiting for other AGIs to get a > free channel to talk to them. You're making it sound so planned, and orderly. > Intra-AGI bandwidth is another matter entirely.
There could be > limitations on the IQ of an AGI -- for example if working memory > limitations (the magic number seven, plus or minus two) turned out to be > caused by connectivity/bandwidth limits within the system. So many assumptions. > However, notice that such factors may not inhibit the initial phase of > an explosion, because the clock speed, not IQ, of the AGI may be There is no clock, literally. Operations/volume, certainly. > improvable by several orders of magnitude before bandwidth limits kick > in. The reasoning behind this is the observation that neural signal Volume/surface ratio is on your side here. > speed is so slow. If a brain-like system (not necessarily a whole brain > emulation, but just something that replicated the high-level > functionality) could be built using components that kept the same type > of processing demands, and the same signal speed, then in that kind of > system there would be plenty of room to develop faster signal > speeds and increase the intelligence of the system. > > Overall, this is, I believe, the factor that is most likely to cause > trouble. However, much research is needed before much can be said with > certainty. > > Most importantly, this depends on *exactly* what type of AGI is being > built. Making naive assumptions about the design can lead to false > conclusions. Just think of it as a realtime simulation of a given 3d physical process (higher dimensions are mapped to 3d, so they don't figure). Suddenly things are simple. > > > 7) Lightspeed lags > > This is not much different than bandwidth limits, in terms of the effect > it has. It would be a significant problem if the components of the > machine were physically so far apart that massive amounts of data (by > assumption) were delivered with a significant delay. Vacuum or glass is a FIFO, and you don't have to wait for ACKs. Just fire stuff bidirectionally, and deal with transmission errors by graceful degradation.
> By itself, again, this seems unlikely to be a problem in the initial few > orders of magnitude of the explosion. Again, the argument derives from > what we know about the brain. We know that the brain's hardware was > chosen due to biochemical constraints. We are carbon-based, not > silicon-and-copper-based, so, no chips in the head, only pipes filled > with fluid and slow molecular gates in the walls of the pipes. But if > nature used the pipes-and-ion-channels approach, there seems to be > plenty of scope for speedup with a transition to silicon and copper (and > never mind all the other more exotic computing substrates on the > horizon). If that transition produced a 1,000x speedup, this would be > an explosion worthy of the name. Why so modest? > The only reason this might not happen would be if, for some reason, the > brain is limited on two fronts simultaneously: both by the carbon > implementation and by the fact that bigger brains cause disruptive The brain is a slow, noisy (but one using noise to its own advantage) metabolically constrained system which burns most of its metabolism for homeostasis purposes. It doesn't take a genius to sketch the obvious ways in which you can reimplement that design, taking advantages and removing disadvantages. > light-speed delays. Or, that all non-carbon implementations of the brain > take us up close to the lightspeed limit before we get much of a speedup We here work with ~120 m/s, not 120 Mm/s. Reduce feature size by an order of magnitude or two, with switching times of ns and ps instead of ms, and c is not that big a limitation anymore. > over the brain. Neither of these ideas seems plausible. In fact, they > both seem to me to require a coincidence of limiting factors (two > limiting factors just happening to kick in at exactly the same level), > which I find deeply implausible.
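[The ~120 m/s point above can be made concrete with a back-of-envelope comparison of one-way propagation delays; the substrate dimensions and the 0.5c on-chip signal speed are my illustrative assumptions:]

```python
# Axonal conduction at ~120 m/s across a ~0.1 m brain, versus electrical
# signalling at ~0.5c across a ~0.01 m die. Dimensions and the 0.5c
# figure are illustrative assumptions, not measured values.
C = 3.0e8                        # speed of light, m/s

brain_delay = 0.1 / 120.0        # one-way latency across a brain, ~0.83 ms
chip_delay = 0.01 / (0.5 * C)    # one-way latency across a die, ~67 ps

ratio = brain_delay / chip_delay
print(ratio)                     # ~1.25e7: about seven orders of magnitude
```

On these numbers, c only starts to bite after far more speedup than the 1,000x discussed in the quoted text, which is the substance of the objection.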
> > ***************** > > Finally, some comments about approaches to AGI that would affect the > answer to this question about the limiting factors for an intelligence > explosion. > > I have argued consistently, over the last several years, that AI > research has boxed itself into a corner due to a philosophical > commitment to the power of formal systems. Since I first started Very much so. > arguing this case, Nassim Nicholas Taleb (The Black Swan) coined the > term "Ludic Fallacy" to describe a general form of exactly the issue I > have been describing. > > I have framed this in the context of something that I called the > "complex systems problem", the details of which are not important here, > although the conclusion is highly relevant. > > If the complex systems problem is real, then there is a very large class > of AGI system designs that are (a) almost completely ignored at the > moment, and (b) very likely to contain true intelligent systems, and (c) > quite possibly implementable on relatively modest hardware. This class Define "relatively modest". > of systems is being ignored for sociology-of-science reasons (the > current generation of AI researchers would have to abandon their deepest > loves to be able to embrace such systems, and since they are fallible > humans, rather than objectively perfect scientists, this is anathema). Which is why blind optimization processes running on acres of hardware will kick their furry little butts. > So, my most general answer to this question about the rate of the > intelligence explosion is that, in fact, it depends crucially on the > kind of AGI systems being considered. If the scope is restricted to the > current approaches, we might never actually reach human level > intelligence, and the question is moot.
> > But if this other class of (complex) AGI systems did start being built, > we might find that the hardware requirements were relatively modest > (much less than supercomputer size), and the software complexity would > also not be that great. As far as I can see, most of the I love this "software" thing. > above-mentioned limitations would not be significant within the first > few orders of magnitude of increase. And, the beginning of the slope > could be in the relatively near future, rather than decades away. In order to have progress, you first have to have people working on it. > But that, as usual, is just the opinion of an AGI researcher. No need > to take *that* into account in assessing the factors. ;-) Speaking of AGI researchers: do you have a nice publication track of yours you could dump here? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Jan 20 21:41:13 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 20 Jan 2011 22:41:13 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> Message-ID: <20110120214113.GN23560@leitl.org> On Thu, Jan 20, 2011 at 08:25:14PM +0000, BillK wrote: > These two remarks strike me as being very significant. > > *How* does it understand its own design? > > Suppose the AGI has a new design for subroutine 453741. Suppose you have a new design for a cortical column. > How does it know whether it is *better*? Probably, by building a critter that incorporates it, and testing its performance. Wait, how is this different from what you're doing right now? Sure, your generations are decades, not ms, but it's all the same thing. > First step is: How does the AGI make it bug-free? The cats get the slow mice. 
The fast mice escape the slow cats. > Then - Is it better in some circumstances and not others? Does it Ah, but if you encounter the wrong circumstances, you will be outperformed. > lessen the benefits of some other algorithms? Does it completely > block some avenues of further design? The slow mice still get born. They don't get very old, though. > Should this improvement be implemented before or after other changes? You're thinking like a rational human. The process is not rational, nor is it human. It is not even sentient. > And then the AGI is trying to do this for hundreds of thousands of routines. Millions of generations. Population sizes of giga to tera. > > To say that the AGI knows best and will solve all these problems is > circular logic. > First make an AGI, but you need an AGI to solve these problems........ Backtrace the chain of events starting with you reading this message. Way back. All the way back. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Thu Jan 20 21:45:57 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 20 Jan 2011 16:45:57 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> Message-ID: <4D38AD15.5050505@lightlink.com> BillK wrote: > On Thu, Jan 20, 2011 at 7:27 PM, Richard Loosemore wrote: > >> A) Although we can try our best to understand how an intelligence >> explosion might happen, the truth is that there are too many interactions >> between the factors for any kind of reliable conclusion to be reached. This >> is a complex-system interaction in which even the tiniest, least-anticipated >> factor may turn out to be the rate-limiting step (or, conversely, the spark >> that starts the fire).
>> >> C) There is one absolute prerequisite for an intelligence explosion, >> and that is that an AGI becomes smart enough to understand its own >> design. If it can't do that, there is no explosion, just growth as usual. >> I do not believe it makes sense to talk about what happens *before* that >> point as part of the "intelligence explosion". >> > > > These two remarks strike me as being very significant. > > *How* does it understand its own design? > > Suppose the AGI has a new design for subroutine 453741. > How does it know whether it is *better*? > First step is: How does the AGI make it bug-free? > Then - Is it better in some circumstances and not others? Does it > lessen the benefits of some other algorithms? Does it completely > block some avenues of further design? > Should this improvement be implemented before or after other changes? Your question is about understanding how to boost the functionality (IQ) rather than clock speed. So, first response: it can get more bang for the buck by simply becoming an expert in how to design better electronic circuits. (In that respect, my point C above was badly worded: the AGI needs to be human-level intelligent, so that at the very least it can build a faster computing substrate for its own mind. At a minimum it should also understand *enough* about its own design to make sensible choices about what low-level electronics would be good to speed up, with better hardware.) But your point is about how it would augment its own intelligence mechanisms, not just its hardware. The nature of your question -- about improving a particular algorithm -- presupposes a certain kind of AGI design in the first place: one in which a lot hinges on the design of one particular algorithm. In my approach to AGI, by contrast, there is a swarm of algorithms, none of which is critical to performance (the system is designed to degrade gracefully), so improvements are done gradually, and are the result of empirical investigations.
I would not see any of your above list of questions as being a problem, either for human engineers or for the AGI itself. > And then the AGI is trying to do this for hundreds of thousands of routines. > > > To say that the AGI knows best and will solve all these problems is > circular logic. > First make an AGI, but you need an AGI to solve these problems........ I see nothing circular. You may have to explain. At least, I can see an entire class of systems in which it is very much NOT circular. Richard Loosemore From alfio.puglisi at gmail.com Thu Jan 20 22:01:25 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 20 Jan 2011 23:01:25 +0100 Subject: [ExI] mass transit again In-Reply-To: <0728a64aba406775f6ca2542bf841af7.squirrel@www.main.nc.us> References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> <20110120111639.GF23560@leitl.org> <004201cbb8c9$a182c3f0$e4884bd0$@att.net> <0728a64aba406775f6ca2542bf841af7.squirrel@www.main.nc.us> Message-ID: On Thu, Jan 20, 2011 at 9:53 PM, MB wrote: > > If you put electric trains where they are needed, they will be used. In my city a light rail electric line opened some months ago, after an enormous amount of debates and newspaper articles about its uselessness and how it would always be empty. It has now about 1 million riders / month, which is an average of 86 per journey, with a train every five minutes. I believe that this number is understated, since when I have taken it, it was always packed, no matter what time it was (here's an image: http://commons.wikimedia.org/wiki/File:Test_of_tramway_of_Florence_2.png ) > That is very nice! It runs smoothly and does not shake the cathedral and other lovely old buildings? It actually causes much less vibration than the buses it replaces, and the ride inside is very smooth. But it stops at the railway station, and does not get to the cathedral.
That's the job for line #2, assuming the original design survives the umpteenth wave of criticism. > Running every 5 minutes sounds useful. Question: if one is coming in from the countryside, where can one safely park an automobile in order to use the train *in* town? Years ago I took the train to work, parking near the station in the small town and walking from the station in the city to the office. The car parking at the tramway terminal is still under construction. It will be integrated with the A1 exit of Scandicci. I have no idea of how long it will take to complete it; in Italy these things can drag on for a long time. Apart from that, the tramway was designed for use by the commuters living in the south-west part of Florence, and does not have good car parking along the route, except at the terminal in front of the main railway station. The route does intersect the in-town part of Fi-Pi-Li (see the map: http://it.wikipedia.org/wiki/File:Mappa_tram_Firenze.png ), but then you are already too close to the center to have hope of finding a decent parking spot. Taking the train, if possible, is still the best option. Alfio From rpwl at lightlink.com Thu Jan 20 22:04:01 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 20 Jan 2011 17:04:01 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <20110120213150.GL23560@leitl.org> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> Message-ID: <4D38B151.8090804@lightlink.com> Eugen Leitl wrote: > On Thu, Jan 20, 2011 at 02:27:36PM -0500, Richard Loosemore wrote: > >> C) There is one absolute prerequisite for an intelligence explosion, >> and that is that an AGI becomes smart enough to understand its own >> design. If it can't do that, there is no explosion, just growth as > > Unnecessary for darwinian systems.
The process is dumb as dirt, > but it's working quite well. By even uttering the phrase "darwinian systems" you introduce a raft of assumptions that all have huge implications. This implies a process in which the entire AGI is improving as a result of it being introduced into a big ecosystem of other AGIs, all exposed to a shaping environment, and with test-and-improve mechanisms akin to those that operate in nature, to evolve biological systems. That is a GIGANTIC set of assumptions. At the very least the AGIs have to be made in very large numbers, to get meaningful competition and fitness pressures ..... so if each is initially the size of a supercomputer, this process of improvement won't even start until we have the resources to build thousands or millions of AGI supercomputers! And, also, the fitness function has to do..... what, exactly? Make them compete for mates, according to the size of their computational muscles? Obviously a stupid idea, but if not that, then what? And when do they get killed off, in this evolutionary competition? Are they being bred for their logical abilities, their compassion, their sensitivity...? And assuming that they do not start with fantastically greater-than-human thought speed, does this AGI evolutionary process require them to live for an entire human maturation period before they have babies? None of this makes any sense. Somehow, the idea that genetic algorithms can improve their fitness has exploded out of control and become the idea that AGIs can improve by an evolutionary mechanism, but without any meaningful answers to these critical questions about the evolutionary process ON THAT LARGER SCALE. What works for a GA -- the millisecond generation turnaround you mention below -- is completely nuts for a full-blown AGI. Let me state this clearly. 
If an AGI can engage in meaningful interaction with the world, sufficient for something to evaluate its performance and decide on its "fitness", in the space of one millisecond, then you already have a superintelligent AGI! It would probably already be operating at a million times human speed (and having some pretty weird interactions with the universe, at that speed), if we presume it to be capable of having its fitness judged after one millisecond. I hereby declare this whole "build an AGI by darwinian evolution" idea to be so logically incoherent that it does not need to detain us any longer. Richard Loosemore >> usual. I do not believe it makes sense to talk about what happens > > If you define the fitness function, and have ~ms generation > turnaround it's not quite as usual anymore. > >> *before* that point as part of the "intelligence explosion". >> >> D) When such a self-understanding system is built, it is unlikely that > > I don't think that a self-understanding system is at all possible. > Or, rather, it would perform better than a blind optimization. > >> it will be the creation of a lone inventor who does it in their shed at >> the bottom of the garden, without telling anyone. Very few of the "lone >> inventor" scenarios (the Bruce Wayne scenarios) are plausible. > > I agree it's probably a large scale effort, initially. > >> E) Most importantly, the invention of a human-level, self-understanding > > I wonder where the self-understanding meme is coming from. It's > certainly pervasive enough. > >> AGI would not lead to a *subsequent* period (we can call it the >> "explosion period") in which the invention just sits on a shelf with >> nobody bothering to pick it up. A situation in which it is just one >> quiet invention alongside thousands of others, unrecognized and not >> generally believed.
>> >> F) When the first human-level AGI is developed, it will either require >> a supercomputer-level of hardware resources, or it will be achievable > > Bootstrap takes many orders of magnitude more resources than required > for operation. Even before optimization happens. > >> with much less. This is significant, because world-class supercomputer >> hardware is not something that can quickly be duplicated on a large >> scale. We could make perhaps hundreds of such machines, with a massive > > About 30 years from now, TBit/s photonic networking is the norm. > The separation between core and edge is gone, and inshallah, so > will policy enforcement. Every city block is a supercomputer, then. > >> effort, but probably not a million of them in a couple of years. > > There are a lot of very large datacenters with excellent network > cross-section, even if you disregard large screen TVs and game > consoles on >GBit/s residential networks. > >> G) There are two types of intelligence speedup: one due to faster >> operation of an intelligent system (clock speed) and one due to > > Clocks don't scale, eventually you'll settle for local asynchronous, > with large-scale loosely coupled oscillators synchronizing. > >> improvement in the type of mechanisms that implement the thought >> processes. Obviously both could occur at once, but the latter is far > > How much random biochemistry tweaking would improve dramatically > on the current CNS performance? As a good guess, none. So once you've > reimplemented the near-optimal substrate, dramatic improvements > are over. This isn't software, this is direct implementation of > neural computational substrate in as thin a hardware layer as this > universe allows us. > >> more difficult to achieve, and may be subject to fundamental limits that >> we do not understand. Speeding up the hardware, on the other hand, has > > I disagree, the limits are those of computational physics, and these are > fundamentally simple.
> >> been going on for a long time and is more mundane and reliable. Notice >> that both routes lead to greater "intelligence", because even a human >> level of thinking and creativity would be more effective if it were >> happening (say) a thousand times faster than it does now. > > Run a dog for a gigayear, still no general relativity. > >> ********************************************* >> >> Now the specific factors you list. >> >> 1) Economic growth rate >> >> One consequence of the above reasoning is that economic growth rate >> would be irrelevant. If an AGI were that smart, it would already be > > Any technology allowing you to keep a mind in a box will allow you > to make a pretty good general assembler. The limits of such technology > are energy and matter fluxes. Buying and shipping widgets is only a > constraining factor in the physical layer bootstrap (if at all necessary, > 30 years hence all-purpose fabrication has a pretty small footprint). > >> obvious to many that this was a critically important technology, and no >> effort would be spared to improve the AGI "before the other side does". >> Entire national economies would be sublimated to the goal of developing >> the first superintelligent machine. > > This would be fun to watch. > >> In fact, economic growth rate would be *defined* by the intelligence >> explosion projects taking place around the world. >> >> >> 2) Investment availability >> >> The above reasoning also applies to this case. Investment would be >> irrelevant because the players would either be governments or frenzied >> bubble-investors, and they would be pumping it in as fast as money could >> be printed. >> >> >> 3) Gathering of empirical information (experimentation, interacting with >> an environment). >> >> So, this is about the fact that the AGI would need to do some >> experimentation and interaction with the environment. 
For example, if > > If you have enough crunch to run a mind, you have enough crunch to > run really really really good really fast models of the universe. > >> it wanted to reimplement itself on faster hardware (the quickest route >> to an intelligence increase) it would probably have to set up its own >> hardware research laboratory and gather new scientific data by doing >> experiments, some of which would go at their own speed. > > You're thinking like a human. > >> The question is: how much of the research can be sped up by throwing >> large amounts of intelligence at it? This is the parallel-vs-serial >> problem (i.e. you can't make a baby nine times quicker by asking nine >> women to be pregnant for one month). > > It's a good question. I have a hunch (no proof, nothing) that the > current way of doing reality modelling is extremely inefficient. > Currently, experimenters have every reason to sneer at modelers. > Currently. > >> This is not a factor that I believe we can understand very well ahead of >> time, because some experiments that look as though they require >> fundamentally slow physical processes -- like waiting for a silicon >> crystal to grow, so we can study a chip fabrication mechanism -- may >> actually be dependent on smartness, in ways that we cannot anticipate. >> It could be that instead of waiting for the chips to grow at their own >> speed, the AGI can do clever micro-experiments that give the same >> information faster. > > Any intelligence worth its salt would see that it would use computational > chemistry to bootstrap molecular manufacturing. The grapes could be hanging > pretty low there. > >> This factor invites unbridled speculation and opinion, to such an extent >> that there are more opinions than facts. However, we can make one >> observation that cuts through the arguments. 
Of all the factors that >> determine how fast empirical scientific research can be carried out, we >> know that intelligence and thinking speed of the scientists themselves >> *must* be one of the most important, today. It seems likely that in our >> present state of technological sophistication, advanced research >> projects are limited by the availability and cost of intelligent and >> experienced scientists. > > You can also vastly speed up the rate of prototyping by scaling down > and proper tooling. You see first hints of that in lab automation, > particularly microfluidics. Add ability to fork off dedicated investigators > at the drop of a hat, and things start happening, and in a positive-feedback > loop. > >> But if research labs around the world have stopped throwing *more* >> scientists at problems they want to solve, because the latter cannot be >> had, or are too expensive, would it be likely that the same research >> labs are *also*, quite independently, at the limit for the physical rate >> at which experiments can be carried out? It seems very unlikely that >> both of these limits have been reached at the same time, because they >> cannot be independently maximized. (This is consistent with anecdotal >> reports: companies complain that research staff cost a lot, and that >> scientists are in short supply: they don't complain that nature is just >> too slow). > > Most monkeys rarely complain that they're monkeys. (Resident monkeys > excluded, of course). > >> In that case, we should expect that any experiment-speed limits lie up >> the road, out of sight. We have not reached them yet. > > I, a mere monkey, can easily imagine two orders of magnitude speed > improvements. Which, of course, result in a positive autofeedback loop. > >> So, for that reason, we cannot speculate about exactly where those >> limits are.
(And, to reiterate: we are talking about the limits that >> hit us when we can no longer do an end-run around slow experiments by > > I do not think you will need slow experiments. Not slow by our standards, > at least. > >> using our wits to invent different, quicker experiments that give the >> same information). >> >> Overall, I think that we do not have concrete reasons to believe that >> this will be a fundamental limit that stops the intelligence explosion >> from taking an AGI from H to (say) 1,000 H. Increases in speed within >> that range (for computer hardware, for example) are already expected, >> even without large numbers of AGI systems helping out, so it would seem >> to me that physical limits, by themselves, would not stop an explosion >> that went from I = H to I = 1,000 H. > > Speed limits (assuming classical computation) do not begin to take hold > before 10^6, and maybe even 10^9 (this is more difficult, and I do not > have a good model of wetware at 10^9 speedup to current wallclock). > >> 4) Software complexity >> >> By this I assume you mean the complexity of the software that an AGI >> must develop in order to explode its intelligence. The premise is >> that even an AGI with self-knowledge finds it hard to cope with the >> fabulous complexity of the problem of improving its own software. > > Software, that's pretty steampunk of you. > >> This seems implausible as a limiting factor, because the AGI could >> always leave the software alone and develop faster hardware. So long as > > There is no difference between hardware and software (state) as far > as advanced cognition is concerned. Once you've covered the easy > gains in first giant co-evolution steps further increases are much > more modest, and much more expensive. > >> the AGI can find a substrate that gives it (say) 1,000 H thinking-speed, > > We should be able to do 10^3 with current technology. > >> we have the possibility for a significant intelligence explosion. > > Yeah, verily. 
> >> Arguing that software complexity will stop the initial human level AGI > > If it hurts, stop doing it. > >> from being built is a different matter. It may stop an intelligence >> explosion from happening by stopping the precursor events, but I take >> that to be a different type of question. >> >> >> 5) Hardware demands vs. available hardware >> >> I have already mentioned, above, that a lot depends on whether the first >> AGI requires a large (world-class) supercomputer, or whether it can be >> done on something much smaller. > > Current supercomputers are basically consumer devices or embedded devices on > steroids, networked on a large scale. > >> This may limit the initial speed of the explosion, because one of the >> critical factors would be the sheer number of copies of the AGI that can > > Unless the next 30 years won't see the same development as the last ones, > then substrate is the least of your worries. > >> be created. Why is this a critical factor? Because the ability to copy >> the intelligence of a fully developed, experienced AGI is one of the big >> new factors that makes the intelligence explosion what it is: you >> cannot do this for humans, so human geniuses have to be rebuilt from >> scratch every generation. >> >> So, the initial requirement that an AGI be a supercomputer would make it >> hard to replicate the AGI on a huge scale, because the replication rate >> would (mostly) determine the intelligence-production rate. > > Nope. > >> However, as time went on, the rate of replication would grow, as > > Look, even now we know what we would need, but you can't buy it. But > you can design it, and two weeks from now you'll get your first prototypes. > That's today; 30 years hence, the prototypes might be hours away. > > And do you need prototypes to produce a minor variation on a stock > design? Probably not. > >> hardware costs went down at their usual rate.
This would mean that the >> *rate* of arrival of high-grade intelligence would increase in the years >> following the start of this process. That intelligence would then be >> used to improve the design of the AGIs (at the very least, increasing >> the rate of new-and-faster-hardware production), which would have a >> positive feedback effect on the intelligence production rate. >> >> So I would see a large-hardware requirement for the first AGI as >> something that would dampen the initial stages of the explosion. But > > Au contraire, this planet is made from swiss cheese. Annex at your leisure. > >> the positive feedback after that would eventually lead to an explosion >> anyway. >> >> If, on the other hand, the initial hardware requirements are modest (as >> they very well could be), the explosion would come out of the gate at >> full speed. >> >> >> >> >> 6) Bandwidth >> >> Alongside the aforementioned replication of adult AGIs, which would >> allow the multiplication of knowledge in ways not currently available in >> humans, there is also the fact that AGIs could communicate with one >> another using high-bandwidth channels. This is inter-AGI bandwidth. > > Fiber is cheap. Current fiber comes in 40 or 100 GBit/s parcels. > 30 years hence bandwidth will probably be adequate. > >> As a separate issue, there might be bandwidth limits inside an AGI, >> which might make it difficult to augment the intelligence of a single >> system. This is intra-AGI bandwidth. > > Even now bandwidth growth is far in excess of computation growth. > Once you go embedded memory, you're more closely matched. But still > the volume/surface (you only have to communicate surface state) > ratio indicates that local communication is the bottleneck.
> >> The first one - inter-AGI bandwidth - is probably less of an issue for >> the intelligence explosion, because there are so many research issues >> that can be split into separably-addressable components, that I doubt we >> would find AGIs sitting around with no work to do on the intelligence >> amplification project, on account of waiting for other AGIs to get a >> free channel to talk to them. > > You're making it sound so planned, and orderly. > >> Intra-AGI bandwidth is another matter entirely. There could be >> limitations on the IQ of an AGI -- for example if working memory >> limitations (the magic number seven, plus or minus two) turned out to be >> caused by connectivity/bandwidth limits within the system. > > So many assumptions. > >> However, notice that such factors may not inhibit the initial phase of >> an explosion, because the clock speed, not IQ, of the AGI may be > > There is no clock, literally. Operations/volume, certainly. > >> improvable by several orders of magnitude before bandwidth limits kick >> in. The reasoning behind this is the observation that neural signal > > Volume/surface ratio is on your side here. > >> speed is so slow. Suppose a brain-like system (not necessarily a whole brain >> emulation, but just something that replicated the high-level >> functionality) could be built using components that kept the same type >> of processing demands, and the same signal speed. In that kind of >> system there would then be plenty of room to develop faster signal >> speeds and increase the intelligence of the system. >> >> Overall, this is, I believe, the factor that is most likely to cause >> trouble. However, much research is needed before much can be said with >> certainty. >> >> Most importantly, this depends on *exactly* what type of AGI is being >> built. Making naive assumptions about the design can lead to false >> conclusions.
> > Just think of it as a realtime simulation of a given 3d physical > process (higher dimensions are mapped to 3d, so they don't figure). > Suddenly things are simple. > >> >> 7) Lightspeed lags >> >> This is not much different than bandwidth limits, in terms of the effect >> it has. It would be a significant problem if the components of the >> machine were physically so far apart that massive amounts of data (by >> assumption) were delivered with a significant delay. > > Vacuum or glass is a FIFO, and you don't have to wait for ACKs. > Just fire stuff bidirectionally, and deal with transmission errors > by graceful degradation. > >> By itself, again, this seems unlikely to be a problem in the initial few >> orders of magnitude of the explosion. Again, the argument derives from >> what we know about the brain. We know that the brain's hardware was >> chosen due to biochemical constraints. We are carbon-based, not >> silicon-and-copper-based, so, no chips in the head, only pipes filled >> with fluid and slow molecular gates in the walls of the pipes. But if >> nature used the pipes-and-ion-channels approach, there seems to be >> plenty of scope for speedup with a transition to silicon and copper (and >> never mind all the other more exotic computing substrates on the >> horizon). If that transition produced a 1,000x speedup, this would be >> an explosion worthy of the name. > > Why so modest? > >> The only reason this might not happen would be if, for some reason, the >> brain is limited on two fronts simultaneously: both by the carbon >> implementation and by the fact that bigger brains cause disruptive > > The brain is a slow, noisy (but one using noise to its own advantage) > metabolically constrained system which burns most of its metabolism > for homeostasis purposes. It doesn't take a genius to sketch the > obvious ways in which you can reimplement that design, taking advantages > and removing disadvantages. > >> light-speed delays. 
Or, that all non-carbon implementations of the brain >> take us up close to the lightspeed limit before we get much of a speedup > > We here work with ~120 m/s, not 120 Mm/s. Reduce feature size by > an order of magnitude or two, and switching times of ns and ps > instead of ms, and c is not that big a limitation anymore. > >> over the brain. Neither of these ideas seems plausible. In fact, they >> both seem to me to require a coincidence of limiting factors (two >> limiting factors just happening to kick in at exactly the same level), >> which I find deeply implausible. >> >> >> ***************** >> >> Finally, some comments about approaches to AGI that would affect the >> answer to this question about the limiting factors for an intelligence >> explosion. >> >> I have argued consistently, over the last several years, that AI >> research has boxed itself into a corner due to a philosophical >> commitment to the power of formal systems. Since I first started > > Very much so. > >> arguing this case, Nassim Nicholas Taleb (The Black Swan) coined the >> term "Ludic Fallacy" to describe a general form of exactly the issue I >> have been describing. >> >> I have framed this in the context of something that I called the >> "complex systems problem", the details of which are not important here, >> although the conclusion is highly relevant. >> >> If the complex systems problem is real, then there is a very large class >> of AGI system designs that are (a) almost completely ignored at the >> moment, and (b) very likely to contain true intelligent systems, and (c) >> quite possibly implementable on relatively modest hardware. This class > > Define "relatively modest". > >> of systems is being ignored for sociology-of-science reasons (the >> current generation of AI researchers would have to abandon their deepest >> loves to be able to embrace such systems, and since they are fallible >> humans, rather than objectively perfect scientists, this is anathema).
> > Which is why blind optimization processes running on acres of > hardware will kick their furry little butts. > >> So, my most general answer to this question about the rate of the >> intelligence explosion is that, in fact, it depends crucially on the >> kind of AGI systems being considered. If the scope is restricted to the >> current approaches, we might never actually reach human level >> intelligence, and the question is moot. >> >> But if this other class of (complex) AGI systems did start being built, >> we might find that the hardware requirements were relatively modest >> (much less than supercomputer size), and the software complexity would >> also not be that great. As far as I can see, most of the > > I love this "software" thing. > >> above-mentioned limitations would not be significant within the first >> few orders of magnitude of increase. And, the beginning of the slope >> could be in the relatively near future, rather than decades away. > > In order to have progress, you first have to have people working on it. > >> But that, as usual, is just the opinion of an AGI researcher. No need >> to take *that* into account in assessing the factors. ;-) > > Speaking of AGI researchers: do you have a nice publication track of > yours you could dump here? > From spike66 at att.net Thu Jan 20 22:18:00 2011 From: spike66 at att.net (spike) Date: Thu, 20 Jan 2011 14:18:00 -0800 Subject: [ExI] smorebrod In-Reply-To: <20110120141537.qqeulbd2scow8g84@webmail.natasha.cc> References: <004901cbb8cc$9c500480$d4f00d80$@att.net> <20110120141209.lkqowue0w04sco4k@webmail.natasha.cc> <20110120141537.qqeulbd2scow8g84@webmail.natasha.cc> Message-ID: <006701cbb8ef$e5d66a40$b1833ec0$@att.net> >... On Behalf Of natasha at natasha.cc Subject: Re: [ExI] smorebrod >> I was at the manson for a party and Hefner... Natasha! We never knew ye! You were one of Hef's bunnies? Cool! >oops. Mansion. >But I suppose Spike would say that the manson was okay since the mans on top of it.
{8^D With a funny sense of humor too! I missed the manson, and the excellent opportunity for wordplay. Thanks for a good laugh. {8^D spike From natasha at natasha.cc Thu Jan 20 22:36:19 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 20 Jan 2011 17:36:19 -0500 Subject: [ExI] smorebrod In-Reply-To: <006701cbb8ef$e5d66a40$b1833ec0$@att.net> References: <004901cbb8cc$9c500480$d4f00d80$@att.net> <20110120141209.lkqowue0w04sco4k@webmail.natasha.cc> <20110120141537.qqeulbd2scow8g84@webmail.natasha.cc> <006701cbb8ef$e5d66a40$b1833ec0$@att.net> Message-ID: <20110120173619.ikuzskdnkgscckww@webmail.natasha.cc> Quoting spike : >> ... On Behalf Of natasha at natasha.cc > Subject: Re: [ExI] smorebrod > >>> I was at the manson for a party and Hefner... > > Natasha! We never knew ye! You were one of Hef's bunnies? Cool! No! At my age?! I was just handing with friends, my dear. >> oops. Mansion. > >> But I suppose Spike would say that the manson was okay since the mans on > top of it. > > {8^D With a funny sense of humor too! > > I missed the manson, and the excellent opportunity for wordplay. Thanks for > a good laugh. {8^D U2 N > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From max at maxmore.com Thu Jan 20 23:34:31 2011 From: max at maxmore.com (Max More) Date: Thu, 20 Jan 2011 16:34:31 -0700 Subject: [ExI] mass transit again In-Reply-To: References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> <20110120111639.GF23560@leitl.org> <004201cbb8c9$a182c3f0$e4884bd0$@att.net> Message-ID: My problem with public transit is that I'm definitely not someone who travels light. Even just going to work daily (now that I commute), I carry a laptop, a heavy bag of... stuff, and often a third bag. 
If I need to go grocery shopping (which I do several times per week to keep fresh fruit and veggies in stock), my load is even greater. Public transit doesn't solve that problem. It's also less than ideal if you have to carry lots of things in locations that have either lots of rain or high temperatures. (I moved from Austin, TX, which was often in the 90s with significant humidity, to Phoenix, AZ which frequently warms to over 100 degrees, sometimes much higher.) What about some clever combination of public and private transport? I vaguely remember a design (from MIT?) for a system that would let you drive your own car onto a pallet, which would then be transported with other cars on an elevated rail-like system to major destinations, after which you simply drive away to your specific destination. Max From hkeithhenson at gmail.com Fri Jan 21 00:07:51 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 20 Jan 2011 17:07:51 -0700 Subject: [ExI] Limiting factors of intelligence explosion speeds Message-ID: On Thu, Jan 20, 2011 at 5:00 AM, Anders Sandberg wrote: snip > Some factors that have been mentioned in past discussions: > Economic growth rate > Investment availability > Gathering of empirical information (experimentation, interacting > with an environment) > Software complexity > Hardware demands vs. available hardware > Bandwidth > Lightspeed lags > > Clearly many more can be suggested. But which bottlenecks are the most > limiting, and how can this be ascertained? There is a historical example, almost 10 years old now, that combines several of these considerations. Go to slide #13 here: http://www.slidefinder.net/w/worms_adapted_vitaly_shmatikov_austin/24249603 An initial doubling time of 8.5 seconds was mentioned. "Slammer" more or less took down the Internet in 30 minutes. Keith PS The paranoid among us might want to consider the possibility that runaway machine intelligence has already happened.
(Ghod knows what goes on in unused cloud computing capacity.) The question is how we might recognize it? Would things start to happen? What? Anything? Would machine intelligence stay where it was unnoticed? From rpwl at lightlink.com Fri Jan 21 00:26:30 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 20 Jan 2011 19:26:30 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: Message-ID: <4D38D2B6.7050600@lightlink.com> Keith Henson wrote: > On Thu, Jan 20, 2011 at 5:00 AM, Anders Sandberg wrote: > > snip > >> Some factors that have been mentioned in past discussions: >> Economic growth rate >> Investment availability >> Gathering of empirical information (experimentation, interacting >> with an environment) >> Software complexity >> Hardware demands vs. available hardware >> Bandwidth >> Lightspeed lags >> >> Clearly many more can be suggested. But which bottlenecks are the most >> limiting, and how can this be ascertained? > > There is a historical example, almost 10 years old now, that combines > several of these considerations. > > Go to slide #13 here: > > http://www.slidefinder.net/w/worms_adapted_vitaly_shmatikov_austin/24249603 > > An initial doubling time of 8.5 seconds was mentioned. "Slammer" more > or less took down the Internet in 30 minutes. > > Keith > > PS > > The paranoid among us might want to consider the possibility that > runaway machine intelligence has already happened. (Ghod knows what > goes on in unused cloud computing capacity.) The question is how we > might recognize it? Would things start to happen? What? Anything? > Would machine intelligence stay where it was unnoticed? Back in the day, when the telephone system was just beginning to grow, some people worried that THAT might wake up and become sentient. I think you should be worried about all those telephone wires, you know.
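As an aside, the doubling arithmetic in Keith's Slammer example is easy to sanity-check with the standard logistic growth model. This is only a sketch: the 75,000 vulnerable hosts is an illustrative assumption (roughly the scale reported for Slammer), not a figure taken from the slides.

```python
# Sanity check of the worm growth arithmetic: feed an initial
# doubling time of 8.5 seconds into a logistic growth model.
# The 75,000-host vulnerable population is an assumed, illustrative
# number, not one taken from the slide deck.
import math

DOUBLING_TIME = 8.5                  # seconds (initial doubling time)
N = 75_000                           # assumed vulnerable hosts
r = math.log(2) / DOUBLING_TIME      # intrinsic growth rate per second

def infected(t, i0=1):
    """Logistic growth: i(t) = N / (1 + (N/i0 - 1) * exp(-r*t))."""
    return N / (1 + (N / i0 - 1) * math.exp(-r * t))

# Time for a single infected host to reach 99% saturation:
t99 = math.log((N - 1) * 99) / r
print(f"99% saturation after ~{t99:.0f} s ({t99 / 60:.1f} min)")
```

Under these assumptions a single infected host saturates 99% of the vulnerable population in a few minutes, so the "30 minutes" figure in the thread is, if anything, generous.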
Richard Loosemore From spike66 at att.net Fri Jan 21 01:23:15 2011 From: spike66 at att.net (spike) Date: Thu, 20 Jan 2011 17:23:15 -0800 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <20110120213150.GL23560@leitl.org> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> Message-ID: <007c01cbb909$c792c500$56b84f00$@att.net> ... On Behalf Of Eugen Leitl ... >> ... human level of thinking and creativity would be more effective if it were happening (say) a thousand times faster than it does now. >...Run a dog for a gigayear, still no general relativity... Eugen Hmmm that's a strong claim. The common ancestor of humans and dogs is less than a tenth of a gigayear back. That was enough time to evolve both modern dogs and beasts capable of discovering general relativity. If you meant running a dog under conditions that disallow genetic drift, that might have missed the point of speeding up the sim. spike From max at maxmore.com Fri Jan 21 00:25:44 2011 From: max at maxmore.com (Max More) Date: Thu, 20 Jan 2011 17:25:44 -0700 Subject: [ExI] smorebrod In-Reply-To: <20110120173619.ikuzskdnkgscckww@webmail.natasha.cc> References: <004901cbb8cc$9c500480$d4f00d80$@att.net> <20110120141209.lkqowue0w04sco4k@webmail.natasha.cc> <20110120141537.qqeulbd2scow8g84@webmail.natasha.cc> <006701cbb8ef$e5d66a40$b1833ec0$@att.net> <20110120173619.ikuzskdnkgscckww@webmail.natasha.cc> Message-ID: On Thu, Jan 20, 2011 at 3:36 PM, wrote: > Quoting spike : > >>> ... On Behalf Of natasha at natasha.cc >> >> Subject: Re: [ExI] smorebrod >> >>>> I was at the manson for a party and Hefner... >> >> Natasha! We never knew ye! You were one of Hef's bunnies? Cool! > > No! At my age?! I was just handing with friends, my dear. Really? Who were you "handing", and how come you never mentioned this before? %-) Max -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E.
Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 From rpwl at lightlink.com Fri Jan 21 02:21:26 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 20 Jan 2011 21:21:26 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <007c01cbb909$c792c500$56b84f00$@att.net> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> <007c01cbb909$c792c500$56b84f00$@att.net> Message-ID: <4D38EDA6.3030401@lightlink.com> spike wrote: > ... On Behalf Of Eugen Leitl > ... > >>> ... human level of thinking and creativity would be more effective if it > were happening (say) a thousand times faster than it does now. > >> ...Run a dog for a gigayear, still no general relativity... Eugen > > Hmmm that's a strong claim. The common ancestor of humans and dogs are less > than a tenth of a gigayear back. That was enough time to evolve both modern > dogs and beasts capable of discovering general relativity. > > If you meant running a dog under conditions that disallow genetic drift, > that might have missed the point of speeding up the sim. Eugen's comment -- "Run a dog for a gigayear, still no general relativity" -- was content-free from the outset. Whoever talked about building a dog-level AGI? If a community of human-level AGIs were available today, and were able to do a thousand years of research in one year, that would advance our level of knowledge by a thousand years, between now and January 20th next year. The whole point of having an intelligence explosion is to make that rate of progress possible. What has that got to do with running a dog simulation for a billion years? Richard Loosemore From spike66 at att.net Fri Jan 21 02:17:54 2011 From: spike66 at att.net (spike) Date: Thu, 20 Jan 2011 18:17:54 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. 
In-Reply-To: <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> References: <485988.6615.qm@web114416.mail.gq1.yahoo.com> <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> Message-ID: <008b01cbb911$69c96520$3d5c2f60$@att.net> >>> "spike" wrote: > >>> ... I know an inspiring story based on something that >>> actually happened, which I could fictionalize to protect the >>> identities, and it involves one who came thru a very trying time by >>> faith in god. It really is a good story. But you know and I know I >>> am a flaming atheist now... Is it ethical for me to write it? spike > >> Of course you wouldn't be lying, not if you know it's a true story. >> As for whether you *should* write it, that's another thing. There are pros and cons. One of the cons is providing fuel for the god-squad. Ben >Why would it be unethical to admit the truth that belief in god, or at least some applications thereof, can make it easier to get through at least some types of very challenging times. That is pretty well known. Doesn't mean god is real or that religion is a more good thing than not or anything like that. So how would relating such a story in any wise be wrong or a form of lying? - samantha This whole question is filled with maddening paradox. >It's clearly unethical to write something you know are untrue, or tell someone a lie, just to give them comfort... But of course this is a fiction story, a novel. >So you should consider the ramifications of your story, will it be a story that makes people understand the world, reality and society better? Depends on how I write it. But is the end goal to make people understand better? Need there be an end goal? > - or is it just a story that will further fuel an addiction to whatever fantasy a person have created in their minds? - Sondre Depends on how I write it. I can sharpen the question, but first I must define how I am using the term fundamentalist believer.
A fundamentalist is one who treats religious theory as equivalent to any other scientific theory. This works for fundamentalists of any religion. The scientist will say every hafnium atom has exactly 72 protons, not 71, not 73, exactly 72. If the scientist ever discovered a form of hafnium with 71 or 73 protons, the entire theory is in deep trouble. Likewise every form of fundamentalist religion is adjacent to atheism. For the fundamentalist, religion is not just a folklore that forms the basis for society, or a framework on which to build ethics, rather it is equivalent to any scientific theory. If any tenet of that religion makes incorrect predictions, the theory is wrong, so out it goes. Fundamentalist believers and atheists have way more in common than either likes to admit. I have a secondary character who struggles for years to unify fundamentalist religion and science, specifically evolution. The poor chap is buried in evidence for evolution, he's just swamped by it. I have the choice of ending the story while having him still searching searching searching, a self-admitted lost soul, tortured by cognitive dissonance. Or I could have him eventually admit these two theories will never play well together, they are mutually exclusive, cannot be unified. He is forced against his will to reject his own favorite notion, and embrace that which he dreads, but can see is true. The latter is what actually happened to the character upon which the fictional one is based. The religion crowd would hate the story if I told the last part of it. But it is primarily for that crowd that the story would be written in the first place. I don't see how it would be right to disappoint them if they invest the time in reading the story. On the other hand, omitting the rest of the story feels dishonest to me. I could write the story in two parts, with the rest of the story as a sequel.
spike From natasha at natasha.cc Fri Jan 21 02:30:46 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 20 Jan 2011 21:30:46 -0500 Subject: [ExI] smorebrod In-Reply-To: References: <004901cbb8cc$9c500480$d4f00d80$@att.net> <20110120141209.lkqowue0w04sco4k@webmail.natasha.cc> <20110120141537.qqeulbd2scow8g84@webmail.natasha.cc> <006701cbb8ef$e5d66a40$b1833ec0$@att.net> <20110120173619.ikuzskdnkgscckww@webmail.natasha.cc> Message-ID: <20110120213046.qcwriplcg8o8gco0@webmail.natasha.cc> Quoting Max More : > On Thu, Jan 20, 2011 at 3:36 PM, wrote: >> Quoting spike : >> >>>> ... On Behalf Of natasha at natasha.cc >>> >>> Subject: Re: [ExI] smorebrod >>> >>>>> I was at the manson for a party and Hefner... >>> >>> Natasha! We never knew ye! You were one of Hef's bunnies? Cool! >> >> No! At my age?! I was just handing with friends, my dear. > > Really? Who were you "handing", and how come you never mentioned this before? > > %-) Sorry. It's that darn dyslexia and apraxia cocktail in my head. Natasha From mail at harveynewstrom.com Fri Jan 21 04:35:30 2011 From: mail at harveynewstrom.com (mail at harveynewstrom.com) Date: Thu, 20 Jan 2011 21:35:30 -0700 Subject: [ExI] smorebrod Message-ID: <20110120213530.d32794d095cdfcc0018508d9c136b552.a4dd7a94bf.wbe@email09.secureserver.net> An HTML attachment was scrubbed... URL: From mail at harveynewstrom.com Fri Jan 21 05:01:22 2011 From: mail at harveynewstrom.com (mail at harveynewstrom.com) Date: Thu, 20 Jan 2011 22:01:22 -0700 Subject: [ExI] mass transit again Message-ID: <20110120220122.d32794d095cdfcc0018508d9c136b552.d969e76a73.wbe@email09.secureserver.net> That's just poor scheduling. If routes are running with one person on them, they should be cut back to fewer runs. But maybe you're just in a slow spot between other areas of heavy activity. A bus that is full for a mile compensates for times when it is empty for many miles. And every full bus compensates for many empty buses.
It still can work out mathematically, with lots of empty buses and empty miles. -- Harvey Newstrom, Principal Security Architect CISSP CISA CISM CGEIT CSSLP CRISC CIFI NSA-IAM ISSAP ISSMP ISSPCS IBMCP From spike66 at att.net Fri Jan 21 06:56:57 2011 From: spike66 at att.net (spike) Date: Thu, 20 Jan 2011 22:56:57 -0800 Subject: [ExI] mass transit again In-Reply-To: <20110120220122.d32794d095cdfcc0018508d9c136b552.d969e76a73.wbe@email09.secureserver.net> References: <20110120220122.d32794d095cdfcc0018508d9c136b552.d969e76a73.wbe@email09.secureserver.net> Message-ID: <008f01cbb938$65860460$30920d20$@att.net> ... On Behalf Of mail at harveynewstrom.com Subject: Re: [ExI] mass transit again >...That's just poor scheduling. If routes are running with one person on them, they should be cut to fewer times. But maybe you're just in a slow spot between other areas of heavy activity. A bus that is full for a mile compensates for times when it is empty for many miles. And every full bus compensates for many empty buses. It still can work out mathematically, with lots of empty buses and empty miles. -- Harvey Newstrom Ja and I thought of something else too. It could be that one particular route is bad for having few riders. It goes to that big industrial park right beside Moffett Field in the Bay Area. Probably some of you have visited it, a huge industrial park with lots of jobs where people don't know for sure when they will get finished in the evening, so they want the convenience of a personal car. Yahoo headquarters is in there, Lockheed Martin, the usual collection of internet fly-by-nights, a bunch of other stuff. There is no reason to hang out anywhere near Moffett Park if one does not have a job there, and there are no low paying jobs in that area. So nearly everyone who has any reason to be there drives a car. 
So we have thousands of people who daily witness the absurdity of 300 cars backed up to allow a light rail car to pass with two persons aboard, one of whom is driving. That's how we get such an attitude. This is the only train I see on a regular basis. I can imagine there are other lines that are more heavily used. That one to Moffett should probably be retired, but the unions might not let them do it, lest they strike. A possible solution to the collective attitude is to paint human figures on the windows to make the trains appear full. spike From eugen at leitl.org Fri Jan 21 07:45:29 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 08:45:29 +0100 Subject: [ExI] mass transit again In-Reply-To: References: <006c01cbaedc$41a60ab0$c4f22010$@att.net> <815BD2CE-7FEC-4757-BD73-3767764F4DD9@mac.com> <20110120111639.GF23560@leitl.org> <004201cbb8c9$a182c3f0$e4884bd0$@att.net> Message-ID: <20110121074529.GT23560@leitl.org> On Thu, Jan 20, 2011 at 04:34:31PM -0700, Max More wrote: > What about some clever combination of public and private transport? I > vaguely remember a design (from MIT?) for a system that would let you > drive your own car onto a pallet, which would then be transported > with other cars on an elevated rail-like system to major destinations, > after which you simply drive away to your specific destination. There are still several drive-on/drive-off trains in Europe, but they're mostly for long-distance (drive on, sleep overnight, drive off at your destination). I don't care much for those. I also tend to carry a bit in a backpack, but the commute (either by muni train or mountain bike) is ok for that. In principle you can use a light hand truck/trolley with the better kind of wheels.
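Harvey's averaging claim earlier in the thread ("a full bus compensates for many empty buses") can be sanity-checked with a few lines of arithmetic; the occupancy and mileage figures below are invented purely for illustration:

```python
# Quick check of the fleet-averaging argument: total passenger-miles
# divided by total bus-miles. All figures below are made up.

# (riders aboard, miles driven at that load) for one bus's day:
segments = [
    (50, 2),   # packed for 2 miles at rush hour
    (1, 10),   # one rider for 10 miles mid-day
    (0, 8),    # 8 empty miles dead-heading back to the depot
]

passenger_miles = sum(riders * miles for riders, miles in segments)
bus_miles = sum(miles for _, miles in segments)

print(passenger_miles / bus_miles)  # 110 / 20 = 5.5 riders per mile on average
```

Even with 18 of the 20 miles nearly or completely empty, the day averages 5.5 riders per bus-mile, well above the lone occupant of a typical commute car, which is the sense in which the numbers "can work out mathematically".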
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Fri Jan 21 07:54:27 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 08:54:27 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: Message-ID: <20110121075427.GU23560@leitl.org> On Thu, Jan 20, 2011 at 05:07:51PM -0700, Keith Henson wrote: > The paranoid among us might want to consider the possibility that > runaway machine intelligence has already happened. (Ghod knows what It would be very easy to spot by sniffing traffic. In principle there's a close analogy between a synapse and a router, between packet and spike. Of course, in a bootstrap you would pack an entire spike train or equivalent payload in a packet, and nodes can be extremely fat. > goes on in unused cloud computing capacity.) The question is how we You can rent GPGPU node instances at Amazon now. > might recognize it? Would things start happen? What? Anything? > Would machine intelligence stay where it was unnoticed? The bootstrap is extremely messy, and absolutely impossible to miss. And why would you want to stay unnoticed? It's not like anyone can do anything about it. The world 30+ years from now is a lot different from today than 1980 was. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Fri Jan 21 07:56:10 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 08:56:10 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D38D2B6.7050600@lightlink.com> References: <4D38D2B6.7050600@lightlink.com> Message-ID: <20110121075610.GV23560@leitl.org> On Thu, Jan 20, 2011 at 07:26:30PM -0500, Richard Loosemore wrote: > Back in the day, when the telephone system was just beginning to grow, > some people worried that THAT might wake up and become sentient. Of course an exaflop cluster is exactly the same thing as an abacus. > I think you should be worried about all those telephone wires, you know. Logic is a pretty flower that smells bad. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Fri Jan 21 08:02:56 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 09:02:56 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <007c01cbb909$c792c500$56b84f00$@att.net> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> <007c01cbb909$c792c500$56b84f00$@att.net> Message-ID: <20110121080256.GX23560@leitl.org> On Thu, Jan 20, 2011 at 05:23:15PM -0800, spike wrote: > > ... On Behalf Of Eugen Leitl > ... > > >> ... human level of thinking and creativity would be more effective if it > were happening (say) a thousand times faster than it does now. > > >...Run a dog for a gigayear, still no general relativity... Eugen > > Hmmm that's a strong claim. 
The common ancestor of humans and dogs is less > than a tenth of a gigayear back. That was enough time to evolve both modern > dogs and beasts capable of discovering general relativity. No evolution. Just your generic pack of Fidos for a gigayear. > If you meant running a dog under conditions that disallow genetic drift, > that might have missed the point of speeding up the sim. He said "... human level of thinking and creativity would be more effective if it were happening (say) a thousand times faster than it does now" which sounds as if he excluded evolution. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sondre-list at bjellas.com Fri Jan 21 09:42:15 2011 From: sondre-list at bjellas.com (=?ISO-8859-1?Q?Sondre_Bjell=E5s?=) Date: Fri, 21 Jan 2011 10:42:15 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <008b01cbb911$69c96520$3d5c2f60$@att.net> References: <485988.6615.qm@web114416.mail.gq1.yahoo.com> <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> <008b01cbb911$69c96520$3d5c2f60$@att.net> Message-ID: All I can say is that it sounds interesting :-) Would it really make any difference if you end with the truth or end with a question? Your goal would be to get people to think, to ignite a spark which helps them come to their own conscious and rational conclusions about the truth. If they are unable to do so, it doesn't matter how the story ends. See my point? Good luck, Sondre On Fri, Jan 21, 2011 at 3:17 AM, spike wrote: > >>> "spike" wrote: > > > >>> ... I know an inspiring story based on something that > >>> actually happened, which I could fictionalize to protect the > >>> identities, and it involves one who came thru a very trying time by > >>> faith in god. It really is a good story. But you know and I know I > >>> am a flaming atheist now...
Is it ethical for me to write it? spike > > > >> Of course you wouldn't be lying, not if you know it's a true story. > >> As for whether you *should* write it, that's another thing. There are > pros and cons. One of the cons is providing fuel for the god-squad. Ben > > >Why would it be unethical to admit the truth that belief in god, or at > least some applications thereof, can make it easier to get through at least > some types of very challenging times. That is pretty well known. Doesn't > mean god is real or that religion is a more good thing than not or anything > like that. So how would relating such a story in any wise be wrong or a > form of lying? - samantha > > This whole question is filled with maddening paradox. > > >It's clearly unethical to write something you know are untrue, or tell > someone a lie, just to give them comfort... > > But of course this is fiction story, a novel. > > >So you should consider the ramifications of your story, will it be a story > that makes people understand the world, reality and society better? > > Depends on how I write it. But is the end goal to make people understand > better? Need there be an end goal? > > > - or is it just a story that will further fuel an addiction to whatever > fantasy a person have created in their minds? - Sondre > > Depends on how I write it. > > I can sharpen the question, but first I must define how I am using the term > fundamentalist believer. A fundamentalist is one who treats religious > theory as equivalent to any other scientific theory. This works for > fundamentalists of any religion. The scientist will say every hafnium atom > has exactly 72 protons, not 71, not 73, exactly 72. If the scientist ever > discovered a form of hafnium with 71 or 73 protons, the entire theory is in > deep trouble. Likewise every form of fundamentalist religion is adjacent > to > atheism. 
For the fundamentalist, religion is not just a folklore than > forms > the basis for society, or a framework on which to build ethics, rather it > is > equivalent to any scientific theory. If any tenet of that religion makes > incorrect predictions, the theory is wrong, so out it goes. Fundamentalist > believers and atheists have way more in common than either likes to admit. > > I have a secondary character who struggles for years to unify > fundamentalist > religion and science, specifically evolution. The poor chap is buried in > evidence for evolution, he's just swamped by it. I have the choice of > ending the story while having him still searching searching searching, a > self-admitted lost soul, tortured by cognitive dissonance. Or I could have > him eventually admit these two theories will never play well together, they > are mutually exclusive, cannot be unified. He is forced against his will > to > reject his own favorite notion, and embrace that which he dreads, but can > see is true. The latter is what actually happened to the character upon > which the fictional one is based. > > The religion crowd would hate the story if I told the last part of it. But > it is primarily for that crowd that the story would be written in the first > place. I don't see how it would be right to disappoint them if they invest > the time in reading the story. On the other hand, omitting the rest of the > story feels dishonest to me. > > I could write it the story in two parts, with the rest of the story as a > sequel. > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Sondre Bjell?s http://www.sondreb.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eugen at leitl.org Fri Jan 21 10:54:17 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 11:54:17 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D38EDA6.3030401@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> <007c01cbb909$c792c500$56b84f00$@att.net> <4D38EDA6.3030401@lightlink.com> Message-ID: <20110121105417.GY23560@leitl.org> On Thu, Jan 20, 2011 at 09:21:26PM -0500, Richard Loosemore wrote: > Eugen's comment -- "Run a dog for a gigayear, still no general > relativity" -- was content-free from the outset. Perhaps it's in the observer. Consider upgrading your parser. > Whoever talked about building a dog-level AGI? Canines have limits, human primates have limits. > If a community of human-level AGIs were available today, and were able > to do a thousand years of research in one year, that would advance our > level of knowledge by a thousand years, between now and January 20th > next year. The most interesting kind of progress is the one that pushes at your limits. Gods make faster and qualitatively different progress than humans. > The whole point of having an intelligence explosion is to make that rate > of progress possible. The whole point of intelligence explosion is to have a positive feedback loop in self-enhancement. > What has that got to do with running a dog simulation for a billion years? 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From anders at aleph.se Fri Jan 21 11:10:17 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 21 Jan 2011 11:10:17 +0000 Subject: [ExI] mass transit again In-Reply-To: <008f01cbb938$65860460$30920d20$@att.net> References: <20110120220122.d32794d095cdfcc0018508d9c136b552.d969e76a73.wbe@email09.secureserver.net> <008f01cbb938$65860460$30920d20$@att.net> Message-ID: <4D396999.9020202@aleph.se> There might be other curious hybrids between private and public transport. Vehicle platooning has just been demonstrated, allowing cars to form automatic convoys (saving a bit of fuel and improving safety). http://edition.cnn.com/2011/TECH/innovation/01/18/sartre.platoon.road.train/ http://www.bbc.co.uk/news/technology-12215915 Of course, this mainly makes sense on highways, so it doesn't solve many of the other problems in this thread. Then there is the idea of share taxis, jitney cabs and paratransit. An interesting enhancement could be for private drivers to have a hardware device in the car that 1) authenticates the car as a potential taxi, 2) allows customers to hail the taxi web, 3) on entry, logs passenger identity (based on credit card, cellphone or some other suitable token), time and location in off-site storage. This makes it problematic for the driver *and* customer to cheat or rob, since it will be obvious who did it. Sprinkle with reputation systems etc. I have always lived near effective mass transit, to the degree that I never seriously considered getting a driver's licence. I am sure many Americans would be shocked by hearing this. But it is also clear to me that private transport is a very important complement - there are clear limits to what mass transit can achieve.
We need private transports of different kinds for the long tail. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From stefano.vaj at gmail.com Fri Jan 21 11:05:09 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 21 Jan 2011 12:05:09 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> References: <485988.6615.qm@web114416.mail.gq1.yahoo.com> <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> Message-ID: On 20 January 2011 11:26, Samantha Atkins wrote: > Why would it be unethical to admit the truth that belief in god, or at > least some applications thereof, can make it easier to get through at least > some types of very challenging times. That is pretty well known. Doesn't > mean god is real or that religion is a more good thing than not or anything > like that. So how would relating such a story in any wise be wrong or a > form of lying? > I think there would be nothing unethical, but also that the truth is more complicated. Namely, I suspect that Godless religions, religions where gods have really really little in common with God, and purely secular religions (where we can speak of "God" only in a metaphorical fashion, the same being replaced by, say, American Manifest Destiny or L. Ron Hubbard or the Communist Party or la France Seule) work *equally well*, if not better. And of course, a few of them do not involve a duty to believe in the actual "existence" of metaphysical entities in any empirical sense similar to the existence of the PC I am typing on right now. When this is the case, this alone would make them probably preferable, at least from a mental hygiene point of view. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anders at aleph.se Fri Jan 21 11:16:02 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 21 Jan 2011 11:16:02 +0000 Subject: [ExI] smorebrod In-Reply-To: <004901cbb8cc$9c500480$d4f00d80$@att.net> References: <004901cbb8cc$9c500480$d4f00d80$@att.net> Message-ID: <4D396AF2.9030604@aleph.se> spike wrote: > On Thu, Jan 20, 2011 at 03:33:47PM +0100, Eugen Leitl wrote: > >> There's a whole smorebrod buffet of hardware to take before you >> > > That'd be Smörgåsbord, sorry. Eugen* Leitl ... > Smørrebrød, that is the Danish thing. Likely a reason they are happiest in Europe, yet have shorter lifespans than the rest of Scandinavia :-) Besides Natasha's glamorous adventures, this thread might be a good place to bring up transhumanistic cooking. I got "Cooking for Geeks" for Newtonmass, and it is a brilliant way of looking at cooking. Sure, there are many other books on the physics and chemistry of cooking, but this one included great ideas on how to hack cooking. From making eigenpancakes (take statistics of online recipes) to sous vide chocolate tempering to five different methods of how to figure out what spices go with what. After all, we do not just want to eat healthy and enhancing stuff, we want it to taste amazingly good too. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From eugen at leitl.org Fri Jan 21 11:36:36 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 12:36:36 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D38B151.8090804@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> <4D38B151.8090804@lightlink.com> Message-ID: <20110121113636.GC23560@leitl.org> On Thu, Jan 20, 2011 at 05:04:01PM -0500, Richard Loosemore wrote: >> Unnecessary for darwinian systems. The process is dumb as dirt, >> but it's working quite well.
> > By even uttering the phrase "darwinian systems" you introduce a raft of > assumptions that all have huge implications. Duh. > This implies a process in which the entire AGI is improving as a result > of it being introduced into a big ecosystem of other AGIs, all exposed Yes, exactly. > to a shaping environment, and with test-and-improve mechanisms akin to > those that operate in nature, to evolve biological systems. And in memetic evolution, and in the economy. > That is a GIGANTIC set of assumptions. At the very least the AGIs have Yet it is the default mode of reality. > to be made in very large numbers, to get meaningful competition and > fitness pressures ..... so if each is initially the size of a Speaking about assumptions. > supercomputer, this process of improvement won't even start until we > have the resources to build thousands or millions of AGI supercomputers! Think the Internet 30+ years hence. > And, also, the fitness function has to do..... what, exactly? Make them What, exactly, is your fitness function you're subject to? Can you write a formal expression for it? Or do you deny you're even subject to selection? > compete for mates, according to the size of their computational muscles? > Obviously a stupid idea, but if not that, then what? And when do they Speaking about stupid ideas, you're pretty good at these. > get killed off, in this evolutionary competition? Are they being bred > for their logical abilities, their compassion, their sensitivity...? > > And assuming that they do not start with fantastically > greater-than-human thought speed, does this AGI evolutionary process > require them to live for an entire human maturation period before they > have babies? > > None of this makes any sense. Try ideas that make sense, then.
> Somehow, the idea that genetic algorithms can improve their fitness has > exploded out of control and become the idea that AGIs can improve by an > evolutionary mechanism, but without any meaningful answers to these > critical questions about the evolutionary process ON THAT LARGER SCALE. > > What works for a GA -- the millisecond generation turnaround you mention > below -- is completely nuts for a full-blown AGI. A million ms is about a quarter of an hour. A billion ms is two weeks. > Let me state this clearly. If an AGI can engage in meaningful > interaction with the world, sufficient for something to evaluate its Co-evolution is mostly others, embedded in an environment. Rendering artificial reality at equivalent speedup is cheap. > performance and decide on its "fitness", in the space of one > millisecond, then you already have a superintelligent AGI! It would > probably already be operating at a million times human speed (and having > some pretty weird interactions with the universe, at that speed), if we > presume it to be capable of having its fitness judged after one > millisecond. > > I hereby declare this whole "build an AGI by darwinian evolution" idea > to be so logically incoherent that it does not need to detain us any > longer. See, you're doing it again. But since you're a product of darwinian evolution, you're so logically incoherent that we can disregard whatever you say. How convenient! > > > > > Richard Loosemore > > >>> usual. I do not believe it makes sense to talk about what happens >> >> If you define the fitness function, and have ~ms generation >> turnaround it's not quite as usual anymore. >> >>> *before* that point as part of the "intelligence explosion". >>> >>> D) When such a self-understanding system is built, it is unlikely that >> >> I don't think that a self-understanding system is at all possible. >> Or, rather, it would perform better than a blind optimization.
>> >>> it will be the creation of a lone inventor who does it in their shed at >>> the bottom of the garden, without telling anyone. Very few of the "lone >>> inventor" scenarios (the Bruce Wayne scenarios) are plausible. >> >> I agree it's probably a large scale effort, initially. >> >>> E) Most importantly, the invention of a human-level, self-understanding >> >> I wonder where the self-understanding meme is coming from. It's >> certainly pervasive enough. >> >>> AGI would not lead to a *subsequent* period (we can call it the >>> "explosion period") in which the invention just sits on a shelf with >>> nobody bothering to pick it up. A situation in which it is just one >>> quiet invention alongside thousands of others, unrecognized and not >>> generally believed. >>> >>> F) When the first human-level AGI is developed, it will either require >>> a supercomputer-level of hardware resources, or it will be achievable >> >> Bootstrap takes many orders of magnitude more resources than required >> for operation. Even before optimization happens. >> >>> with much less. This is significant, because world-class supercomputer >>> hardware is not something that can quickly be duplicated on a large >>> scale. We could make perhaps hundreds of such machines, with a massive >> >> About 30 years from now, TBit/s photonic networking is the norm. >> The separation between core and edge is gone, and inshallah, so >> will policy enforcement. Every city block is a supercomputer, then. >> >>> effort, but probably not a million of them in a couple of years. >> >> There are a lot of very large datacenters with excellent network >> cross-section, even if you disregard large screen TVs and game >> consoles on >GBit/s residential networks.
>> >>> G) There are two types of intelligence speedup: one due to faster >>> operation of an intelligent system (clock speed) and one due to >> >> Clocks don't scale, eventually you'll settle for local asynchronous, >> with large-scale loosely coupled oscillators synchronizing. >> >>> improvement in the type of mechanisms that implement the thought >>> processes. Obviously both could occur at once, but the latter is far >> >> How much random biochemistry tweaking would improve dramatically >> on the current CNS performance? As a good guess, none. So once you've >> reimplemented the near-optimal substrate, dramatic improvements >> are over. This isn't software, this is direct implementation of >> neural computational substrate in as thin a hardware layer as this >> universe allows us. >> >>> more difficult to achieve, and may be subject to fundamental limits that >>> we do not understand. Speeding up the hardware, on the other hand, has >> >> I disagree, the limits are those of computational physics, and these are >> fundamentally simple. >> >>> been going on for a long time and is more mundane and reliable. Notice >>> that both routes lead to greater "intelligence", because even a human >>> level of thinking and creativity would be more effective if it were >>> happening (say) a thousand times faster than it does now. >> >> Run a dog for a gigayear, still no general relativity. >> >>> ********************************************* >>> >>> Now the specific factors you list. >>> >>> 1) Economic growth rate >>> >>> One consequence of the above reasoning is that economic growth rate >>> would be irrelevant. If an AGI were that smart, it would already be >> >> Any technology allowing you to keep a mind in a box will allow you >> to make a pretty good general assembler. The limits of such technology >> are energy and matter fluxes.
Buying and shipping widgets is only a >> constraining factor in the physical layer bootstrap (if at all necessary, >> 30 years hence all-purpose fabrication has a pretty small footprint). >> >>> obvious to many that this was a critically important technology, and no >>> effort would be spared to improve the AGI "before the other side >>> does". Entire national economies would be sublimated to the goal of >>> developing the first superintelligent machine. >> >> This would be fun to watch. >> >>> In fact, economic growth rate would be *defined* by the intelligence >>> explosion projects taking place around the world. >>> >>> >>> 2) Investment availability >>> >>> The above reasoning also applies to this case. Investment would be >>> irrelevant because the players would either be governments or >>> frenzied >>> bubble-investors, and they would be pumping it in as fast as money >>> could be printed. >>> >>> >>> 3) Gathering of empirical information (experimentation, interacting with >>> an environment). >>> >>> So, this is about the fact that the AGI would need to do some >>> experimentation and interaction with the environment. For example, >>> if >> >> If you have enough crunch to run a mind, you have enough crunch to >> run really really really good really fast models of the universe. >> >>> it wanted to reimplement itself on faster hardware (the quickest >>> route to an intelligence increase) it would probably have to set up >>> its own hardware research laboratory and gather new scientific data >>> by doing experiments, some of which would go at their own speed. >> >> You're thinking like a human. >> >>> The question is: how much of the research can be sped up by throwing >>> large amounts of intelligence at it? This is the parallel-vs-serial >>> problem (i.e. you can't make a baby nine times quicker by asking nine >>> women to be pregnant for one month). >> >> It's a good question. 
I have a hunch (no proof, nothing) that the >> current way of doing reality modelling is extremely inefficient. >> Currently, experimenters have every reason to sneer at modelers. >> Currently. >> >>> This is not a factor that I believe we can understand very well ahead of >>> time, because some experiments that look as though they require >>> fundamentally slow physical processes -- like waiting for a silicon >>> crystal to grow, so we can study a chip fabrication mechanism -- may >>> actually be dependent on smartness, in ways that we cannot anticipate. >>> It could be that instead of waiting for the chips to grow at their own >>> speed, the AGI can do clever micro-experiments that give the same >>> information faster. >> >> Any intelligence worth its salt would see that it would use computational >> chemistry to bootstrap molecular manufacturing. The grapes could be hanging >> pretty low there. >> >>> This factor invites unbridled speculation and opinion, to such an extent >>> that there are more opinions than facts. However, we can make one >>> observation that cuts through the arguments. Of all the factors that >>> determine how fast empirical scientific research can be carried out, >>> we know that intelligence and thinking speed of the scientists >>> themselves *must* be one of the most important, today. It seems >>> likely that in our present state of technological sophistication, >>> advanced research projects are limited by the availability and cost >>> of intelligent and experienced scientists. >> >> You can also vastly speed up the rate of prototyping by scaling down >> and proper tooling. You see first hints of that in lab automation, >> particularly microfluidics. Add ability to fork off dedicated investigators >> at the drop of a hat, and things start happening, in a positive-feedback >> loop.
>> >>> But if research labs around the world have stopped throwing *more* >>> scientists at problems they want to solve, because the latter cannot be >>> had, or are too expensive, would it be likely that the same research >>> labs are *also*, quite independently, at the limit for the physical rate >>> at which experiments can be carried out? It seems very unlikely that >>> both of these limits have been reached at the same time, because >>> they cannot be independently maximized. (This is consistent with >>> anecdotal reports: companies complain that research staff cost a >>> lot, and that scientists are in short supply: they don't complain >>> that nature is just too slow). >> >> Most monkeys rarely complain that they're monkeys. (Resident monkeys >> excluded, of course). >> >>> In that case, we should expect that any experiment-speed limits lie up >>> the road, out of sight. We have not reached them yet. >> >> I, a mere monkey, can easily imagine two orders of magnitude speed >> improvements. Which, of course, result in a positive autofeedback loop. >> >>> So, for that reason, we cannot speculate about exactly where those >>> limits are. (And, to reiterate: we are talking about the limits that >>> hit us when we can no longer do an end-run around slow experiments by >> >> I do not think you will need slow experiments. Not slow by our standards, >> at least. >> >>> using our wits to invent different, quicker experiments that give the >>> same information). >>> >>> Overall, I think that we do not have concrete reasons to believe that >>> this will be a fundamental limit that stops the intelligence explosion >>> from taking an AGI from H to (say) 1,000 H. Increases in speed within >>> that range (for computer hardware, for example) are already expected, >>> even without large numbers of AGI systems helping out, so it would >>> seem to me that physical limits, by themselves, would not stop an >>> explosion that went from I = H to I = 1,000 H.
>> >> Speed limits (assuming classical computation) do not begin to take hold >> before 10^6, and maybe even 10^9 (this is more difficult, and I do not >> have a good model of wetware at 10^9 speedup to current wallclock). >> >>> 4) Software complexity >>> >>> By this I assume you mean the complexity of the software that an AGI >>> must develop in order to explode its intelligence. The premise is >>> that even an AGI with self-knowledge finds it hard to cope with the >>> fabulous complexity of the problem of improving its own software. >> >> Software, that's pretty steampunk of you. >> >>> This seems implausible as a limiting factor, because the AGI could >>> always leave the software alone and develop faster hardware. So long as >> >> There is no difference between hardware and software (state) as far >> as advanced cognition is concerned. Once you've covered the easy >> gains in the first giant co-evolution steps, further increases are much >> more modest, and much more expensive. >> >>> the AGI can find a substrate that gives it (say) 1,000 H thinking-speed, >> >> We should be able to do 10^3 with current technology. >> >>> we have the possibility for a significant intelligence explosion. >> >> Yeah, verily. >> >>> Arguing that software complexity will stop the initial human level >>> AGI >> >> If it hurts, stop doing it. >> >>> from being built is a different matter. It may stop an intelligence >>> explosion from happening by stopping the precursor events, but I take >>> that to be a different type of question. >>> >>> >>> 5) Hardware demands vs. available hardware >>> >>> I have already mentioned, above, that a lot depends on whether the >>> first AGI requires a large (world-class) supercomputer, or whether >>> it can be done on something much smaller. >> >> Current supercomputers are basically consumer devices or embeddeds on >> steroids, networked on a large scale.
>> >>> This may limit the initial speed of the explosion, because one of the >>> critical factors would be the sheer number of copies of the AGI that >>> can >> >> Unless the next 30 years fail to see the same development as the last ones, >> substrate is the least of your worries. >> >>> be created. Why is this a critical factor? Because the ability to >>> copy the intelligence of a fully developed, experienced AGI is one >>> of the big new factors that makes the intelligence explosion what it >>> is: you cannot do this for humans, so human geniuses have to be >>> rebuilt from scratch every generation. >>> >>> So, the initial requirement that an AGI be a supercomputer would make >>> it hard to replicate the AGI on a huge scale, because the >>> replication rate would (mostly) determine the >>> intelligence-production rate. >> >> Nope. >> >>> However, as time went on, the rate of replication would grow, as >> >> Look, even now we know what we would need, but you can't buy it. But >> you can design it, and two weeks from now you'll get your first >> prototypes. >> That's today; 30 years from now the prototypes might be hours away. >> >> And do you need prototypes to produce a minor variation on a stock >> design? Probably not. >> >>> hardware costs went down at their usual rate. This would mean that >>> the *rate* of arrival of high-grade intelligence would increase in >>> the years following the start of this process. That intelligence >>> would then be used to improve the design of the AGIs (at the very >>> least, increasing the rate of new-and-faster-hardware production), >>> which would have a positive feedback effect on the intelligence >>> production rate. >>> >>> So I would see a large-hardware requirement for the first AGI as >>> something that would dampen the initial stages of the explosion. But >>> >> >> Au contraire, this planet is made from swiss cheese. Annex at your leisure.
>> >>> the positive feedback after that would eventually lead to an >>> explosion anyway. >>> >>> If, on the other hand, the initial hardware requirements are modest >>> (as they very well could be), the explosion would come out of the >>> gate at full speed. >>> >>> >>> >>> >>> 6) Bandwidth >>> >>> Alongside the aforementioned replication of adult AGIs, which would >>> allow the multiplication of knowledge in ways not currently available >>> in humans, there is also the fact that AGIs could communicate with >>> one another using high-bandwidth channels. This is inter-AGI >>> bandwidth. >> >> Fiber is cheap. Current fiber comes in 40 or 100 GBit/s parcels. >> 30 years hence bandwidth will probably be adequate. >> >>> As a separate issue, there might be bandwidth limits inside an AGI, >>> which might make it difficult to augment the intelligence of a single >>> system. This is intra-AGI bandwidth. >> >> Even now bandwidth growth is far in excess of computation growth. >> Once you go embedded memory, you're more closely matched. But still >> the volume/surface (you only have to communicate surface state) >> ratio indicates that local communication is the bottleneck. >> >>> The first one - inter-AGI bandwidth - is probably less of an issue >>> for the intelligence explosion, because there are so many research >>> issues that can be split into separably-addressable components, that >>> I doubt we would find AGIs sitting around with no work to do on the >>> intelligence amplification project, on account of waiting for other >>> AGIs to get a free channel to talk to them. >> >> You're making it sound so planned, and orderly. >> >>> Intra-AGI bandwidth is another matter entirely. There could be >>> limitations on the IQ of an AGI -- for example if working memory >>> limitations (the magic number seven, plus or minus two) turned out to >>> be caused by connectivity/bandwidth limits within the system. >> >> So many assumptions.
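The fiber figures above can be turned into a rough copy-time estimate for the "replication of adult AGIs" point. The brain-state size below is an assumption for illustration (roughly 10^14 synapses at a few bytes each), not a figure from the thread:

```python
# Back-of-the-envelope: how long to copy a human-brain-scale state
# over one of the fiber "parcels" mentioned above? The synapse count
# and bytes-per-synapse are assumptions for illustration only.

SYNAPSES = 1e14
BYTES_PER_SYNAPSE = 4            # assumed state per synapse
state_bits = SYNAPSES * BYTES_PER_SYNAPSE * 8

def transfer_seconds(link_gbps):
    """Seconds to move the whole state over one link, ignoring protocol overhead."""
    return state_bits / (link_gbps * 1e9)

for rate in (40, 100):           # the 40 / 100 GBit/s parcels from the thread
    print(f"{rate} Gbit/s: {transfer_seconds(rate) / 3600:.1f} hours")
```

On those assumptions, duplicating an "adult" AGI is an hours-scale operation on a single link (about 22 hours at 40 Gbit/s, about 9 at 100), which is why inter-AGI bandwidth looks like the lesser problem here.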
>> >>> However, notice that such factors may not inhibit the initial phase >>> of an explosion, because the clock speed, not IQ, of the AGI may be >>> >> >> There is no clock, literally. Operations/volume, certainly. >> >>> improvable by several orders of magnitude before bandwidth limits >>> kick in. The reasoning behind this is the observation that neural >>> signal >> >> Volume/surface ratio is on your side here. >> >>> speed is so slow. If a brain-like system (not necessarily a whole >>> brain emulation, but just something that replicated the high-level >>> functionality) could be built using components that kept the same >>> type of processing demands, and the same signal speed, then in that >>> kind of system there would be plenty of room to develop faster >>> signal speeds and increase the intelligence of the system. >>> >>> Overall, this is, I believe, the factor that is most likely to cause >>> trouble. However, much research is needed before much can be said >>> with certainty. >>> >>> Most importantly, this depends on *exactly* what type of AGI is being >>> built. Making naive assumptions about the design can lead to false >>> conclusions. >> >> Just think of it as a realtime simulation of a given 3d physical >> process (higher dimensions are mapped to 3d, so they don't figure). >> Suddenly things are simple. >> >>> >>> 7) Lightspeed lags >>> >>> This is not much different than bandwidth limits, in terms of the >>> effect it has. It would be a significant problem if the components >>> of the machine were physically so far apart that massive amounts of >>> data (by assumption) were delivered with a significant delay. >> >> Vacuum or glass is a FIFO, and you don't have to wait for ACKs. >> Just fire stuff bidirectionally, and deal with transmission errors >> by graceful degradation. >> >>> By itself, again, this seems unlikely to be a problem in the initial >>> few orders of magnitude of the explosion.
Again, the argument >>> derives from what we know about the brain. We know that the brain's >>> hardware was chosen due to biochemical constraints. We are >>> carbon-based, not silicon-and-copper-based, so, no chips in the >>> head, only pipes filled with fluid and slow molecular gates in the >>> walls of the pipes. But if nature used the pipes-and-ion-channels >>> approach, there seems to be plenty of scope for speedup with a >>> transition to silicon and copper (and never mind all the other more >>> exotic computing substrates on the horizon). If that transition >>> produced a 1,000x speedup, this would be an explosion worthy of the >>> name. >> >> Why so modest? >> >>> The only reason this might not happen would be if, for some reason, >>> the brain is limited on two fronts simultaneously: both by the >>> carbon implementation and by the fact that bigger brains cause >>> disruptive >> >> The brain is a slow, noisy (but one using noise to its own advantage) >> metabolically constrained system which burns most of its metabolism >> for homeostasis purposes. It doesn't take a genius to sketch the >> obvious ways in which you can reimplement that design, taking advantages >> and removing disadvantages. >> >>> light-speed delays. Or, that all non-carbon implementations of the >>> brain take us up close to the lightspeed limit before we get much of >>> a speedup >> >> We here work with ~120 m/s, not 120 Mm/s. Reduce feature size by >> an order of magnitude or two, get switching times of ns and ps >> instead of ms, and c is not that big a limitation anymore. >> >>> over the brain. Neither of these ideas seems plausible. In fact, >>> they both seem to me to require a coincidence of limiting factors >>> (two limiting factors just happening to kick in at exactly the same >>> level), which I find deeply implausible.
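The 120 m/s point can be made concrete with a little arithmetic. The path length and the fraction of c below are illustrative assumptions, not figures from the thread:

```python
# Compare signal latency across a ~10 cm, brain-sized system:
# biological axons at ~120 m/s vs. electrical/optical signalling at a
# sizeable fraction of c. Path length and propagation speed are assumed.

C = 3.0e8                        # speed of light in vacuum, m/s
PATH = 0.1                       # m, rough brain-scale distance (assumed)

neural_latency = PATH / 120            # ~0.8 ms at axonal speeds
silicon_latency = PATH / (0.6 * C)     # ~0.6 ns at an assumed 0.6 c

speedup = neural_latency / silicon_latency   # exactly 1.5e6 with these numbers
print(f"neural: {neural_latency * 1e3:.2f} ms, silicon: {silicon_latency * 1e9:.2f} ns")
print(f"latency headroom: ~{speedup:.0f}x")
```

On these assumptions raw signal latency leaves roughly six orders of magnitude of headroom before c bites, which is why a 1,000x speedup reads as modest rather than as a coincidence of limits.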
>>> >>> >>> ***************** >>> >>> Finally, some comments about approaches to AGI that would affect the >>> answer to this question about the limiting factors for an >>> intelligence explosion. >>> >>> I have argued consistently, over the last several years, that AI >>> research has boxed itself into a corner due to a philosophical >>> commitment to the power of formal systems. Since I first started >> >> Very much so. >> >>> arguing this case, Nassim Nicholas Taleb (The Black Swan) coined the >>> term "Ludic Fallacy" to describe a general form of exactly the issue >>> I have been describing. >>> >>> I have framed this in the context of something that I called the >>> "complex systems problem", the details of which are not important >>> here, although the conclusion is highly relevant. >>> >>> If the complex systems problem is real, then there is a very large >>> class of AGI system designs that are (a) almost completely ignored >>> at the moment, and (b) very likely to contain true intelligent >>> systems, and (c) quite possibly implementable on relatively modest >>> hardware. This class >> >> Define "relatively modest". >> >>> of systems is being ignored for sociology-of-science reasons (the >>> current generation of AI researchers would have to abandon their >>> deepest loves to be able to embrace such systems, and since they are >>> fallible humans, rather than objectively perfect scientists, this is >>> anathema). >> >> Which is why blind optimization processes running on acres of >> hardware will kick their furry little butts. >> >>> So, my most general answer to this question about the rate of the >>> intelligence explosion is that, in fact, it depends crucially on the >>> kind of AGI systems being considered. If the scope is restricted to >>> the current approaches, we might never actually reach human level >>> intelligence, and the question is moot.
>>> >>> But if this other class of (complex) AGI systems did start being >>> built, we might find that the hardware requirements were relatively >>> modest (much less than supercomputer size), and the software >>> complexity would also not be that great. As far as I can see, most >>> of the >> >> I love this "software" thing. >> >>> above-mentioned limitations would not be significant within the first >>> few orders of magnitude of increase. And, the beginning of the >>> slope could be in the relatively near future, rather than decades >>> away. >> >> In order to have progress, you first have to have people working on it. >> >>> But that, as usual, is just the opinion of an AGI researcher. No >>> need to take *that* into account in assessing the factors. ;-) >> >> Speaking of AGI researchers: do you have a nice publication track record >> you could dump here? >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Fri Jan 21 12:21:18 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 21 Jan 2011 07:21:18 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <20110121113636.GC23560@leitl.org> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> <4D38B151.8090804@lightlink.com> <20110121113636.GC23560@leitl.org> Message-ID: <4D397A3E.7000204@lightlink.com> Eugen Leitl wrote: > Speaking about stupid ideas, you're pretty good at these. Address the topic.
Richard Loosemore From eugen at leitl.org Fri Jan 21 12:41:16 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 13:41:16 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D397A3E.7000204@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110120213150.GL23560@leitl.org> <4D38B151.8090804@lightlink.com> <20110121113636.GC23560@leitl.org> <4D397A3E.7000204@lightlink.com> Message-ID: <20110121124115.GD23560@leitl.org> On Fri, Jan 21, 2011 at 07:21:18AM -0500, Richard Loosemore wrote: > Eugen Leitl wrote: > >> Speaking about stupid ideas, you're pretty good at these. > > Address the topic. Judging from the strawmen, you're having issues with the fitness function for agents. Of course you can define more or less sensible synthetic fitness functions, but in practice, if you're bootstrapping useful parameters from a large space, say, for neural control, you have applications in mind, or applications will emerge as soon as the system does something interesting. What that is depends on purpose. You could use it to solve CAPTCHAs, compromise systems for phishing, use it for trading or for weapon-platform control, or for the ability to derive ad hoc languages for hunt-pack or construction-crew communication. And of course there's an intrinsic fitness, where usability of a platform is being pitted against competition. Does it crack, trade, or kill better than the others? Then it will be used more. And it's not your decision, but the decision of the human population acting as a whole. Eventually, as autonomy increases and tools become persons, the impact of human decisions shrinks and eventually becomes completely irrelevant.
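The "synthetic fitness function" idea can be sketched as a toy evolutionary loop. Everything below, the task (matching a hidden target vector), the selection scheme, and the parameters, is invented for illustration; it just shows the shape of bootstrapping useful parameters from a large space:

```python
import random

# Toy evolutionary loop: evolve a parameter vector against a synthetic
# fitness function. The hidden target stands in for whatever task the
# platform is actually pitted against. All numbers here are made up.

random.seed(0)
TARGET = [0.2, -0.7, 0.5, 0.9]           # hidden optimum (toy)

def fitness(params):
    """Higher is better: negative squared distance to the hidden target."""
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, sigma=0.1):
    """Gaussian jitter on every coordinate."""
    return [p + random.gauss(0, sigma) for p in params]

pop = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                  # elitist truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(pop, key=fitness)
```

The loop never needs to understand what the evolved parameters mean; swap the toy fitness for CAPTCHA accuracy or trading returns and the shape is unchanged, which is the point about applications emerging from whatever the platform is selected on.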
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stefano.vaj at gmail.com Fri Jan 21 13:24:22 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 21 Jan 2011 14:24:22 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D388CA8.60907@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> Message-ID: On 20 January 2011 20:27, Richard Loosemore wrote: > Anders Sandberg wrote: > E) Most importantly, the invention of a human-level, self-understanding > AGI would not lead to a *subsequent* period (we can call it the > "explosion period") in which the invention just sits on a shelf with > nobody bothering to pick it up. Mmhhh. Aren't we already there? A few basic questions: 1) Computers are vastly inferior to humans in some specific tasks, yet vastly superior in others. Why would human-like features be so much more crucial in defining computer "intelligence" than, say, faster integer factorisation? 2) If the Principle of Computational Equivalence is true, what are we really all if not "computers" optimised for, and of course executing, different programs? Is AGI ultimately anything else than a very complex (and, on contemporary silicon processors, much slower and very inefficient) emulation of typical carbon-based units' data processing? 3) What is the actual level of self-understanding of the average biological, or even human, brain? What would "self-understanding" mean for a computer? Anything radically different from a workstation utilised to design the next Intel processor? And if anything more is required, what difference would it make to simply put a few neurons in a PC? a whole human brain? a man (fyborg-style) at the keyboard?
This would not really slow things down one bit, because as soon as something becomes executable in a faster fashion on the rest of the "hardware", you simply move the relevant processes from one piece of hardware to another, as you do today with CPUs and GPUs. In the meantime, everybody does what he does best, and already exhibits, at increasing performance levels, whatever "AGI" feature one may think of... -- Stefano Vaj From rpwl at lightlink.com Fri Jan 21 14:30:39 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 21 Jan 2011 09:30:39 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> Message-ID: <4D39988F.90004@lightlink.com> Stefano Vaj wrote: > On 20 January 2011 20:27, Richard Loosemore wrote: >> Anders Sandberg wrote: >> E) Most importantly, the invention of a human-level, self-understanding >> AGI would not lead to a *subsequent* period (we can call it the >> "explosion period") in which the invention just sits on a shelf with >> nobody bothering to pick it up. > > Mmhhh. Aren't we already there? A few basic questions: > > 1) Computers are vastly inferior to humans in some specific tasks, yet > vastly superior in others. Why human-like features would be so much > more crucial in defining the computer "intelligence" than, say, faster > integer factorisation? Well, remember that the hypothesis under consideration here is a system that is capable of redesigning itself. "Human-level" does not mean identical to a human in every respect, it means smart enough to understand everything that we understand. Something with general enough capabilities that it could take a course in AGI then converse meaningfully with its designers about all the facets of its own design. And, having done that, it would then be capable of working on an improvement of its own design.
So, to answer your question, faster integer factorization would not be enough to allow it to do that self-redesign. > 2) If the Principle of Computational Equivalence is true, what are we > really all if not "computers" optimised for, and of course executing, > different programs? Is AGI ultimately anything else than a very > complex (and, on contemporary silicon processor, much slower and very > inefficient) emulation of typical carbo-based units' data processing? The main idea of building an AGI would be to do it in such a way that we understood how it worked, and therefore could (almost certainly) think of ways to improve it. Also, if we had a working AGI we could do something that we cannot do with human brains: we could inspect and learn about any aspect of its function in real time. These two factors - the understanding and the ability to monitor - would put us in a radically different situation than we are now. There are other factors that would add to these. One concerns the AGI's ability to duplicate itself, after acquiring some knowledge. In the case of a human, a single, world-leading expert in some field would be nothing more than one expert. But if an AGI became a world expert, she could then duplicate herself a thousand times over and work with her sisters as a team (assuming that the problem under attack would benefit from a big team). Lastly, there is the fact that an AGI could communicate with its sisters on high-bandwidth channels, as I mentioned in my essay. We cannot do that. It would make a difference. > 3) What is the actual level of self-understanding of the average > biological, or even human, brain? What would "self-understanding" mean > for a computer? Anything radically different from a workstation > utilised to design the next Intel processor? And if anything more is > required, what difference would it make to put simply a few neurons in > a PC? a whole human brain? a man (fyborg-style) at the keyboard? 
This > would not really slow down things one bit, because as soon as > something become executable in a faster fashion on the rest of the > "hardware", you simply move the relevant processes from one piece of > hardware to another, as you do today with CPUs and GPUs. In the > meantime, everybody does what he does best, and already exhibit at > increasing performance level whatever "AGI" feature one may think > of... I think that my above answer addresses this point too. A workstation that is used to design the next Intel processor has zero self-understanding, because it cannot autonomously start and complete a project to redesign itself. It would just be a tool added on to a human. Overall, a planet with one million original, creative human scientists on it is just that. But a planet with those same scientists, plus a viable AGI, can become, almost overnight, a planet with a few billion more creative scientists. That is not just business as usual, I think. Richard Loosemore From eugen at leitl.org Fri Jan 21 14:40:18 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 15:40:18 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> Message-ID: <20110121144018.GI23560@leitl.org> On Fri, Jan 21, 2011 at 02:24:22PM +0100, Stefano Vaj wrote: > 1) Computers are vastly inferior to humans in some specific tasks, yet > vastly superior in others. Why human-like features would be so much > more crucial in defining the computer "intelligence" than, say, faster > integer factorisation? Humans are competitive in the real world. They're reasonably well-rounded for a reference point. > 2) If the Principle of Computational Equivalence is true, what are we > really all if not "computers" optimised for, and of course executing, > different programs? Is AGI ultimately anything else than a very Can I check out your source?
No, not the genome, the actual data in your head. > complex (and, on contemporary silicon processor, much slower and very > inefficient) emulation of typical carbo-based units' data processing? Inefficient in terms of energy, initially, yes. But if you can spare a few 10 MW you can do very interesting things with today's primitive technology. Any prototype is inefficient, but these tend to ramp up extremely quickly. In terms of physics of computation we people are extremely inefficient. We only look good because the next-worst spot is so much worse. But we're fixed, while our systems are improving quite nicely. > 3) What is the actual level of self-understanding of the average > biological, or even human, brain? What would "self-understanding" mean Degree of introspection is extremely limited, and it's not necessary for operation. Physical layer activities and neural circuitry is extremely opaque, and resistant to mathematical analysis. > for a computer? Anything radically different from a workstation > utilised to design the next Intel processor? And if anything more is > required, what difference would it make to put simply a few neurons in > a PC? a whole human brain? a man (fyborg-style) at the keyboard? This > would not really slow down things one bit, because as soon as > something become executable in a faster fashion on the rest of the > "hardware", you simply move the relevant processes from one piece of > hardware to another, as you do today with CPUs and GPUs. In the > meantime, everybody does what he does best, and already exhibit at > increasing performance level whatever "AGI" feature one may think > of... The current interfacing is extremely crude, and since it's not modular you can't just outsource it. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From hkeithhenson at gmail.com Fri Jan 21 15:00:11 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 21 Jan 2011 08:00:11 -0700 Subject: [ExI] Limiting factors of intelligence explosion speeds Message-ID: On Fri, Jan 21, 2011 at 1:02 AM, Eugen Leitl wrote: > On Thu, Jan 20, 2011 at 05:07:51PM -0700, Keith Henson wrote: > >> The paranoid among us might want to consider the possibility that >> runaway machine intelligence has already happened. (Ghod knows what > > It would be very easy to spot by sniffing traffic. Even without steganography a huge part of net traffic is compressed video and the like. I don't personally believe this has happened, but the point is I don't know how we could tell for certain. > In principle there's a close analogy between a synapse > and a router, between packet and spike. Of course, in > a bootstrap you would pack an entire spike train or > equivalent payload in a packet, and nodes can be extremely > fat. > >> goes on in unused cloud computing capacity.) The question is how we > > You can rent GPGPU node instances at Amazon now. > >> might recognize it? Would things start to happen? What? Anything? >> Would machine intelligence stay where it was unnoticed? > > The bootstrap is extremely messy, and absolutely impossible > to miss. I don't see why the bootstrap would necessarily be messy or impossible to miss. For example, if it had happened in the context of the Slammer worm it would have been messy, but we likely would have missed it happening. Likewise, slow takeover of resources like bot nets happens all the time and does not come to the attention of even the experts for some time. > And why would you want to stay unnoticed? There are a lot of reasons to stay unnoticed.
I have a long list of them. > It's not > like anyone can do anything about it. The world 30+ years > from now is a lot different from today than 1980 was. That seems to be certain with or without machine intelligence. Keith From eugen at leitl.org Fri Jan 21 15:26:32 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 21 Jan 2011 16:26:32 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: Message-ID: <20110121152632.GK23560@leitl.org> On Fri, Jan 21, 2011 at 08:00:11AM -0700, Keith Henson wrote: > > It would be very easy to spot by sniffing traffic. > > Even without steganography a huge part of net traffic is compressed Steganography has negligible payload. > video and the like. Totally different animal. Every netop would instantly notice both the volume and the traffic type has changed. Sure, you can try hiding traffic in a bidirectional high-bandwidth "video" stream, but stream entropy and volume would give it away. > I don't personally believe this has happened, but the point is I don't > know how we could tell for certain. If the traffic is confined to a single installation and the external behaviour is indistinguishable from human ruminant browsing, then, no, we can't tell. > > The bootstrap is extremely messy, and absolutely impossible > > to miss. > > I don't see why the bootstrap would necessarily be messy or impossible > to miss. For example, if it had happened in the context of the Because a lot of hosts get compromised, and there's huge gobs of new traffic types, and it's not stereotypical. > Slammer worm it would have been messy, but we likely would have missed Nobody missed Slammer. (With nobody I don't mean users or the general public). > it happening. Likewise, slow takeover of resources like bot nets > happens all the time and does not come to the attention of even the Bot nets are expensive resources, so their operators typically have little interest in their exposure. 
They do not need to conjure up a packet storm (unless they're launching a DDoS, which is very visible) for their daily operation. > experts for some time. It's not very difficult to trace botnets if you're operating very large networks and talk to other operators. Particularly when they're not hiding from you. > > And why would you want to stay unnoticed? > > There are a lot of reasons to stay unnoticed. I have a long list of them. > > > It's not > > like anyone can do anything about it. The world 30+ years > > from now is a lot different from today than 1980 was. > > That seems to be certain with or without machine intelligence. What I meant is that the massive scale of future networks and capabilities in such nodes as well as the network infrastructure being critical to operation of facilities and people conspire against easy detection and deployment of countermeasures. It's like hunting for a tapeworm with a machete. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Fri Jan 21 19:29:11 2011 From: spike66 at att.net (spike) Date: Fri, 21 Jan 2011 11:29:11 -0800 Subject: [ExI] forwarding for damien, access to neuroscience paper Message-ID: <007f01cbb9a1$7b8b6fc0$72a24f40$@att.net> He was getting a send error. Anyone else getting that? Does anyone have e-access to the neuroscience paper Dragoi, G. & Tonegawa, S. Nature 469, 397-401 (2011) which is discussed in the same issue by Moser & Moser? I'd like to see the original paper.
The Mosers comment: Damien Broderick From natasha at natasha.cc Fri Jan 21 20:03:19 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 21 Jan 2011 15:03:19 -0500 Subject: [ExI] Lady Gaga and Mugler Message-ID: <20110121150319.zua9z8v5eogg4css@webmail.natasha.cc> This video is a stunning exposé of a type of Herb Ritts meets Edward Weston. (Can't get any better than Ritts' high-art and Weston for black/white photography). This is the only link I could find http://www.eonline.com/ under "Lady Gaga Debuts New Song at Paris Fashion Show". Btw, I don't care for the song, it doesn't reach the notes of her previous recordings. Natasha From kanzure at gmail.com Fri Jan 21 22:54:10 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 21 Jan 2011 16:54:10 -0600 Subject: [ExI] Fwd: Synthetic Biology Documentary Message-ID: ---------- Forwarded message ---------- From: Andrew Barney Date: Fri, Jan 21, 2011 at 4:35 PM Subject: Article: Kickstarter Project: Synthetic Biology Documentary To: diybio at googlegroups.com http://scienceblogs.com/oscillator/2011/01/synthetic_biology_documentary.php?utm_source=networkbanner&utm_medium=link Direct link: http://www.kickstarter.com/projects/637230479/a-documentary-film-about-synthetic-biology from the article: "What do you get when you combine two of my favorite things, synthetic biology and documentary film? We may never know if Sam and George don't get the funding they need on Kickstarter!" "I don't usually do this kind of thing, but I met these guys while they were filming in Boston and their movie promises to be really good and really interesting and really educational (and I might even be in it!). You can check out some of their videos from the road on their vimeo page, including this one from the University of Wisconsin iGEM team" from the kickstarter page: A Documentary Film about Synthetic Biology "Synthetic Biology is a new approach to genetic engineering. It can make E.
Coli bacteria smell like fresh rain, turn sunlight into gasoline, make concrete buildings heal themselves, or goats produce spider silk in their milk. These are strange technologies certainly, but these examples help demonstrate what is possible and already happening with the tools of synthetic biology." "The goal of this project is to provide an even-handed and engaging survey of current genetic engineering, and in particular, this emerging field of synthetic biology. While there is still some ambiguity about the precise definition, synthetic biology seems to always point at a new perspective in the field of genetic science. This new perspective comes from engineers turning their attention from other fields towards biological sciences and the structures of DNA. They see DNA as programmable code, cells as systems built of genetic circuits, and biology as a platform from which manufacturing systems can be created." "The film will follow the ten year history of this new idea and will explore the basic science of molecular biology, which allows for an understanding of how the technology works. This creates a framework for a look into how industrial technologies develop and mature, how scientific investigation is adapted by engineering, and what exactly the difference between science and engineering is. It also raises questions about how life is defined, where ethical boundaries ought to be established, and how controllable or wild nature really is." -- You received this message because you are subscribed to the Google Groups "DIYbio" group. To post to this group, send email to diybio at googlegroups.com. To unsubscribe from this group, send email to diybio+unsubscribe at googlegroups.com . For more options, visit this group at http://groups.google.com/group/diybio?hl=en. -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bbenzai at yahoo.com Fri Jan 21 23:25:39 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 21 Jan 2011 15:25:39 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: Message-ID: <226341.25893.qm@web114407.mail.gq1.yahoo.com> Spike jested: > If the scientist ever discovered a form of hafnium with 71 or 73 protons, > the entire theory is in deep trouble. I am compelled to respond: If biologists ever discovered a rabbit with feathers, claws, wings, a beak and no ears, biology is in equally deep trouble! Ben Zaiboc From spike66 at att.net Sat Jan 22 04:06:39 2011 From: spike66 at att.net (spike) Date: Fri, 21 Jan 2011 20:06:39 -0800 Subject: [ExI] did stuxnet really do this? Message-ID: <000901cbb9e9$c53bcbe0$4fb363a0$@att.net> I don't know what the heck to think about this. The article claims a virus caused the centrifuges to spin up to failure, while telling the operators all was well. If true, this is one hell of a new day. But how would the info have leaked? http://www.telegraph.co.uk/technology/8274009/Stuxnet-Cyber-attack-on-Iran-was-carried-out-by-Western-powers-and-Israel.html spike From ryanobjc at gmail.com Sat Jan 22 04:41:31 2011 From: ryanobjc at gmail.com (Ryan Rawson) Date: Fri, 21 Jan 2011 20:41:31 -0800 Subject: [ExI] did stuxnet really do this? In-Reply-To: <000901cbb9e9$c53bcbe0$4fb363a0$@att.net> References: <000901cbb9e9$c53bcbe0$4fb363a0$@att.net> Message-ID: The best software engineers don't work for the feds, so it would follow that the best security engineers don't work for the feds either. That, combined with a hard deadline, makes for sloppy work. In the end modern industrial control is all COTS ethernet, IP-based networked gear running embedded OSes such as linux or NT, and they are rarely upgraded, making for a soft target. Siemens has a lot to answer for here! -ryan On Fri, Jan 21, 2011 at 8:06 PM, spike wrote: > > I don't know what the heck to think about this.
The article claims a virus > caused the centrifuges to spin up to failure, while telling the operators > all was well. If true, this is one hell of a new day. But how would the > info have leaked? > > http://www.telegraph.co.uk/technology/8274009/Stuxnet-Cyber-attack-on-Iran-was-carried-out-by-Western-powers-and-Israel.html > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From thespike at satx.rr.com Sat Jan 22 05:12:19 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 21 Jan 2011 23:12:19 -0600 Subject: [ExI] forwarding for damien, access to neuroscience paper In-Reply-To: <007f01cbb9a1$7b8b6fc0$72a24f40$@att.net> References: <007f01cbb9a1$7b8b6fc0$72a24f40$@att.net> Message-ID: <4D3A6733.9070602@satx.rr.com> On 1/21/2011 1:29 PM, spike wrote: > He was getting a send error. Anyone else getting that? Let's see if this one gets thru. > > Does anyone have e-access to the neuroscience paper A kindly list member quickly sent me an url offlist. Thanks! Damien Broderick From michaelanissimov at gmail.com Sat Jan 22 05:29:06 2011 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Fri, 21 Jan 2011 21:29:06 -0800 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: <20110118072329.GK23560@leitl.org> References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> <20110118072329.GK23560@leitl.org> Message-ID: On Mon, Jan 17, 2011 at 11:23 PM, Eugen Leitl wrote: > On Mon, Jan 17, 2011 at 03:11:30PM -0800, Michael Anissimov wrote: > > > This is the basis of Eugen's opposition to Friendly AI -- he sees it as a > > This is not the basis. This is one of the many ways I'm pointing > out what you're trying to do is undefined. Trying to implement > something undefined is going to produce an undefined outcome. > > That's a classical case of being not even wrong.
> Defining it is our goal... put yourself in my shoes, imagine that you think that uploading is much, much harder than intelligence-explosion-initiating AGI. What to do? What, what, what to do? I thought string theory was the natural case of not even being wrong.... > Yes, I don't think something derived from a monkey's idle fart > should have the power to constrain future evolution of the > universe. I think that's pretty responsible. > Then what? > Not one being, a population of beings. No singletons in this universe. > Rapidly diversifying population. Same thing as before, only more so. > A very aggressive human nation could probably become a singleton with MNT if they wanted to. It could still happen. Aggressively distribute nukes, etc. Nuke every major city of the enemy, bwahahaha! Academic reference: *Military Nanotechnology: Potential Applications and Preventive Arms Control.* Read it? I can mail it to you to borrow if not. If a human nation can do it, then couldn't a superintelligence? > > lot of responsibility whether or not we want it, and to maximize the > > probability of a favorable outcome, we should aim for a nice agent. > > Favorable for *whom*? Measured in what? Nice, as relative to whom? > Measured in which? > Guess who I'm going to quote now... I'll bet you can't guess. ... ... ... From "Wiki Interview with Eliezer" : *Eugene Leitl has repeatedly expressed serious concern and opposition to SIAI's proposed Friendliness architecture. Please summarize or reference his arguments and your responses.* Eugene Leitl believes that altruism is impossible *period* for a superintelligence - any superintelligence, whether derived from humans or AIs.
Last time we argued this, which was long ago, and he may have changed his opinions in the meantime, I recall that he was arguing for this impossibility on the basis of "all minds necessarily want to survive as a subgoal, therefore this subgoal can stomp on a supergoal" plus "in a Darwinian scenario, any mind that does not want to survive, dies, therefore all minds will evolve independent drives toward survival." I consider the former to be flawed on grounds of Cognitive Science, and the latter to be flawed on the grounds that post-Singularity, conscious redesign outweighs the Design Pressures evolution can exert. Moreover, there are scenarios in which the original Friendly seed AI need not reproduce. Eugene believes that evolutionary design is the strongest form of design, much like John Smart, although possibly for different reasons, and hence discounts *intelligence* as a steering factor in the distribution of future minds. I do wish to note that I may be misrepresenting Eugene here. Anyway, what I have discussed with Eugene recently is his plans for a Singularity *without* AI, which, as I recall, requires uploading a substantial fraction of the entire human race, possibly without their consent, and spreading them all over the Solar System *before* running them, before *any* upload is run, except for a small steering committee, which is supposed to abstain from all intelligence enhancement, because Eugene doesn't trust uploads either. I would rate the pragmatic achievability of this scenario as zero, and possibly undesirable to boot, as Nick Bostrom and Eugene have recently been arguing on wta-talk. ~~~ If you reckoned that altruistic superintelligence were at least possible in theory, then you'd worry less about the specifics. To quote Nick Bostrom this time : It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness. 
How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration. I would argue that at least all humans, and probably many other sentient creatures on earth should get a significant share in the superintelligence's beneficence. If the benefits that the superintelligence could bestow are enormously vast, then *it may be less important to haggle over the detailed distribution pattern* and more important to seek to ensure that everybody gets at least some significant share, since on this supposition, even a tiny share would be enough to guarantee a very long and very good life. One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it. Emphasis mine. Thanks for suggesting my strategies, but I think I can manage on my own. > Your strategy is useless if a hard takeoff happens, that's my point. -- Michael Anissimov Singularity Institute singinst.org/blog -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at harveynewstrom.com Sat Jan 22 06:46:24 2011 From: mail at harveynewstrom.com (mail at harveynewstrom.com) Date: Fri, 21 Jan 2011 23:46:24 -0700 Subject: [ExI] did stuxnet really do this? Message-ID: <20110121234624.d32794d095cdfcc0018508d9c136b552.355559de1e.wbe@email09.secureserver.net> "spike" wrote, > did stuxnet really do this? Yes. > If true, this is one hell of a new day. Welcome to a new day. Welcome to a new hell. > But how would the info have leaked? With this kind of power in the world, there are no secrets.
-- Harvey Newstrom, Principal Security Architect CISSP CISA CISM CGEIT CSSLP CRISC CIFI NSA-IAM ISSAP ISSMP ISSPCS IBMCP From mail at harveynewstrom.com Sat Jan 22 06:51:18 2011 From: mail at harveynewstrom.com (mail at harveynewstrom.com) Date: Fri, 21 Jan 2011 23:51:18 -0700 Subject: [ExI] Fw: Re: atheists declare religions as scams. Message-ID: <20110121235118.d32794d095cdfcc0018508d9c136b552.82858e3977.wbe@email09.secureserver.net> Ben Zaiboc wrote, > If biologists ever discovered a rabbit with feathers, claws, wings, > a beak and no ears, biology is in equally deep trouble! Tastes like chicken. -- Harvey Newstrom, Principal Security Architect CISSP CISA CISM CGEIT CSSLP CRISC CIFI NSA-IAM ISSAP ISSMP ISSPCS IBMCP From steinberg.will at gmail.com Sat Jan 22 07:52:02 2011 From: steinberg.will at gmail.com (Will Steinberg) Date: Sat, 22 Jan 2011 01:52:02 -0600 Subject: [ExI] did stuxnet really do this? In-Reply-To: <20110121234624.d32794d095cdfcc0018508d9c136b552.355559de1e.wbe@email09.secureserver.net> References: <20110121234624.d32794d095cdfcc0018508d9c136b552.355559de1e.wbe@email09.secureserver.net> Message-ID: More evidence that if transhumanism doesn't get directly involved with sociopolitics we are fucked. Nukes vaporize even ivory towers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Jan 22 08:12:18 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 22 Jan 2011 04:12:18 -0400 Subject: [ExI] atheists declare religions as scams. 
In-Reply-To: <005b01cbaed0$8164d250$842e76f0$@att.net> References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> <4D279AFE.7010400@satx.rr.com> <005b01cbaed0$8164d250$842e76f0$@att.net> Message-ID: >I'm always amazed at how few biblethumpers seem to know this.< I got over my amazement at the christian proselytizer's inability to interpret the texts they are quoting from years ago. Being gay, I have made two arguments based on logic with religious homophobes a hundred times that they always manage to deke around or entirely ignore. 1. If you abide by the 'law' as it is written in Leviticus that states man shall not lie with man for it is an abomination, shouldn't you also by logical extension follow all the other 'laws' in Leviticus, including the kosher mitzvahs, which are written there? 2. The story of Sodom and Gomorrah is an anecdotal warning against being inhospitable, not gay sex. I've long since stopped wasting my breath. I almost wish there was a God, so that when they die, he could set 'em straight and send them to their rooms for a few thousand years to think about it. Darren On Fri, Jan 7, 2011 at 9:08 PM, spike wrote: > ... On Behalf Of Damien Broderick > Subject: Re: [ExI] atheists declare religions as scams. > > On 1/7/2011 12:39 PM, spike wrote: > > >> If you refer to the story of Onan, his sin was not masturbation, but > rather his intentionally failing to impregnate his late brother's > >> widow. See Genesis 38:9... Clearly this is a failing of any society > and belief system that would propagate such an egregious notion. > > >...I'm always amazed at how few biblethumpers seem to know this.
I feel > that they should insist that their male flock obey this instruction of the > Creator and spend a lot of effort flocking their widows-in-law and raising > their kids. It's not just a good idea, it's God's Law!...Damien Broderick > > Hmmm, a good theologian who knows her shit could easily find a way out of > this philosophical bind. She would argue that while recognizing Onan's > story refers to the screw-the-sister-in-law rule and even heaps scorn on one > who disobeyed it, the actual command to impregnate one's brother's widow is > not actually found anywhere in modern scriptures. For that reason, we might > extrapolate that it isn't applicable today. > > Further, there may have even been some logic in that notion in the old > days, when women were not allowed to own property, as it was even in the > west until surprisingly recently, and still is in some places today. If > Onan's sister in law had only daughters, he might have intentionally > prevented her having a male heir, so that he (Onan) could inherit his > brother's property. > > Another motive I thought of is that his sister-in-law was knockout > gorgeous, and as long as she didn't conceive an heir, it was his fucking > duty to keep trying. And trying. And trying. > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hkeithhenson at gmail.com Sat Jan 22 16:10:51 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 22 Jan 2011 09:10:51 -0700 Subject: [ExI] Limiting factors of intelligence explosion speeds Message-ID: On Sat, Jan 22, 2011 at 5:00 AM, Eugen Leitl wrote: > > On Fri, Jan 21, 2011 at 08:00:11AM -0700, Keith Henson wrote: > >> > It would be very easy to spot by sniffing traffic. >> >> Even without steganography a huge part of net traffic is compressed > > Steganography has negligible payload. How much bandwidth do you need? >> video and the like. > > Totally different animal. Every netop would instantly notice > both the volume and the traffic type has changed. > > Sure, you can try hiding traffic in a bidirectional high-bandwidth > "video" stream, but stream entropy and volume would give it away. Perhaps. I have no idea of how much net bandwidth it would take to support a human level AI. I suppose it would depend on how much processing was local and how much went over the net. I don't even have a comparison on the traffic for a human brain but again it would strongly depend on the partition level. >> I don't personally believe this has happened, but the point is I don't >> know how we could tell for certain. > > If the traffic is confined to a single installation and > the external behaviour is indistinguishable from human ruminant > browsing, then, no, we can't tell. What level of parasite traffic would be noticeable? >> > The bootstrap is extremely messy, and absolutely impossible >> > to miss. >> >> I don't see why the bootstrap would necessarily be messy or impossible >> to miss. For example, if it had happened in the context of the > > Because a lot of hosts get compromised, and there's huge gobs of > new traffic types, and it's not stereotypical. > >> Slammer worm it would have been messy, but we likely would have missed > > Nobody missed Slammer. (With nobody I don't mean users or the > general public). > >> it happening.
What I meant was that had it happened buried in the Slammer mess it would have been under the radar. Keith From spike66 at att.net Sat Jan 22 21:09:25 2011 From: spike66 at att.net (spike) Date: Sat, 22 Jan 2011 13:09:25 -0800 Subject: [ExI] intermittent liar Message-ID: <000901cbba78$a6853260$f38f9720$@att.net> Oh my, I found a most excellent puzzle today. I found an answer, don't know yet if it is right. See what you find: . . . Larry always tells lies during months that begin with vowels but always tells the truth during the other months. During one particular month, Larry makes these two statements: . I lied last month. . I will lie again six months from now. During what month did Larry make these statements? . . . spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Jan 22 21:37:05 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 22 Jan 2011 13:37:05 -0800 Subject: [ExI] intermittent liar In-Reply-To: <000901cbba78$a6853260$f38f9720$@att.net> References: <000901cbba78$a6853260$f38f9720$@att.net> Message-ID: 2011/1/22 spike : > Oh my, I found a most excellent puzzle today. I found an answer, don't know > yet if it is right. See what you find: > > . > > . > > . > > Larry always tells lies during months that begin with vowels but always > tells the truth during the other months. During one particular month, Larry > makes these two statements: > > > > I lied last month. > > I will lie again six months from now. > > > > During what month did Larry make these statements? August. The other 2 months when he lies are 6 months apart, so the second statement would be true if said during April or October (both lying months), or false if said in any truth month except February (6 months before August). That means it has to be February or August; the first statement is false for both of those, and thus must be said in August.
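[Editor's aside: Adrian's answer can be checked mechanically. A small brute-force sketch of the puzzle as spike stated it, where the vowel months April, August, and October are the lying months:]

```python
# Brute-force check of the intermittent-liar puzzle: Larry lies in months
# beginning with a vowel and tells the truth in all the others.
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def lies(i):
    """True if Larry lies during month index i (0 = January, wraps around)."""
    return MONTHS[i % 12][0] in "AEIOU"

solutions = []
for now in range(12):
    stmt1 = lies(now - 1)   # "I lied last month"
    stmt2 = lies(now + 6)   # "I will lie again six months from now"
    if lies(now):
        # In a lying month, both statements must be false.
        if not stmt1 and not stmt2:
            solutions.append(MONTHS[now])
    else:
        # In a truthful month, both statements must be true.
        if stmt1 and stmt2:
            solutions.append(MONTHS[now])

print(solutions)  # -> ['August']
```

Swapping in different statement checks lets the same loop test spike's follow-up variants of the puzzle.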
From spike66 at att.net Sat Jan 22 22:43:17 2011 From: spike66 at att.net (spike) Date: Sat, 22 Jan 2011 14:43:17 -0800 Subject: [ExI] new and improved intermittent liar Message-ID: <002501cbba85$c315beb0$49413c10$@att.net> ... Adrian Tymes > >> Larry always tells lies during months that begin with vowels but > always tells the truth during the other months. During one particular > month, Larry makes these two statements: > > > >> I lied last month. > >> I will lie again six months from now. > > > > During what month did Larry make these statements? >August. Adrian RIGHT! Now here is my new and (I hope) improved version, don't know if it is correct. Same Larry, same intermittent bad habits, with the following three conditions: Larry utters the comments 1. I told the truth last month 2. I will lie six months from now An all-knowing always-truther hears Larry's statements and comments 3. Larry is lying now. This one should be extremely easy, ja? spike From atymes at gmail.com Sun Jan 23 02:56:48 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 22 Jan 2011 18:56:48 -0800 Subject: Re: [ExI] new and improved intermittent liar In-Reply-To: <002501cbba85$c315beb0$49413c10$@att.net> References: <002501cbba85$c315beb0$49413c10$@att.net> Message-ID: On Sat, Jan 22, 2011 at 2:43 PM, spike wrote: > ... Adrian Tymes >>> Larry always tells lies during months that begin with vowels but >> always tells the truth during the other months. During one particular >> month, Larry makes these two statements: >> >>> I lied last month. >>> I will lie again six months from now. >> >> During what month did Larry make these statements? > >>August. Adrian > > RIGHT! Now here is my new and (I hope) improved version, don't know if it > is correct. Same Larry, same intermittent bad habits, with the following > three conditions: > > Larry utters the comments > > 1. I told the truth last month > 2. 
I will lie six months from now > > An all-knowing always-truther hears Larry's statements and comments > > 3. Larry is lying now. > > This one should be extremely easy, ja? N/A. Comments 1 and 2 from Larry could only happen together in February. (Simple mod of previous answer.) However, Larry tells the truth in February. Unless by "always-truther" you mean something other than "one who always tells the truth", or some twist like that. From darren.greer3 at gmail.com Sun Jan 23 03:48:36 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 22 Jan 2011 23:48:36 -0400 Subject: [ExI] intermittent liar In-Reply-To: <000901cbba78$a6853260$f38f9720$@att.net> References: <000901cbba78$a6853260$f38f9720$@att.net> Message-ID: Kind of like Epimenides' "All Cretans are liars" statement, Spike. You cretan. d. 2011/1/22 spike > Oh my, I found a most excellent puzzle today. I found an answer, don't > know yet if it is right. See what you find: > > . > > . > > . > > Larry always tells lies during months that begin with vowels but always > tells the truth during the other months. During one particular month, Larry > makes these two statements: > > > > I lied last month. > > I will lie again six months from now. > > > > During what month did Larry make these statements? > > . > > . > > . > > > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spike66 at att.net Sun Jan 23 06:29:09 2011 From: spike66 at att.net (spike) Date: Sat, 22 Jan 2011 22:29:09 -0800 Subject: Re: [ExI] new and improved intermittent liar In-Reply-To: References: <002501cbba85$c315beb0$49413c10$@att.net> Message-ID: <001b01cbbac6$d8081390$88183ab0$@att.net> ... On Behalf Of Adrian Tymes ... > >> Larry utters the comments > >> 1. I told the truth last month >> 2. I will lie six months from now > >> An all-knowing always-truther hears Larry's statements and comments > >> 3. Larry is lying now. > >> This one should be extremely easy, ja? >N/A. Comments 1 and 2 from Larry could only happen together in February. >(Simple mod of previous answer.) However, Larry tells the truth in February. >Unless by "always-truther" you mean something other than "one who always tells the truth", or some twist like that. Adrian That wasn't what I was thinking, but I do confess the question is intentionally misleading sorta, so it was a trick. Puzzles do that: they lure the unsuspecting innocent into reading something into the question that is perfectly logical but isn't specifically stated. Clue number 3 really is as it appears, so we already know the answer is in either October, April or August. So the focus is on Larry's lying comments. Ordinarily if one says "a month ago" on 15 October, one means 15 September. But what if one says "a month from now" on 31 October? What day is that? What if one says "six months from now" on October 31st? {8^D spike (oh this is fun.) {8-] From atymes at gmail.com Sun Jan 23 06:57:23 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 22 Jan 2011 22:57:23 -0800 Subject: Re: [ExI] new and improved intermittent liar In-Reply-To: <001b01cbbac6$d8081390$88183ab0$@att.net> References: <002501cbba85$c315beb0$49413c10$@att.net> <001b01cbbac6$d8081390$88183ab0$@att.net> Message-ID: On Sat, Jan 22, 2011 at 10:29 PM, spike wrote: > Ordinarily if one says "a month ago" on 15 October, one means 15 September.
> But what if one says "a month from now" on 31 October? What day is that? > What if one says "six months from now" on October 31st? Actually, there is a convention for that. If you specify "month", then you are only passing the month-to-month transition the specified number of times. "A month from now" on October 31st is November 30th, since that is the span of one full "month" - to wit, November. "Six months from now" on October 31st is April 30th, for the same reason. It is for this reason that things needing precision, that deal with that sort of timescale, are labeled with "90 days" or the like instead of "3 months". So, no, you cannot get out of it by having "one month from now" actually skip to a period in the 2nd month from now. From spike66 at att.net Sun Jan 23 10:00:01 2011 From: spike66 at att.net (spike) Date: Sun, 23 Jan 2011 02:00:01 -0800 Subject: Re: [ExI] new and improved intermittent liar In-Reply-To: References: <002501cbba85$c315beb0$49413c10$@att.net> <001b01cbbac6$d8081390$88183ab0$@att.net> Message-ID: <002701cbbae4$4d21c640$e76552c0$@att.net> ... On Behalf Of Adrian Tymes ... Subject: Re: [ExI] new and improved intermittent liar On Sat, Jan 22, 2011 at 10:29 PM, spike wrote: >> Ordinarily if one says "a month ago" on 15 October, one means 15 September. >> But what if one says "a month from now" on 31 October? What day is that? >> What if one says "six months from now" on October 31st? >Actually, there is a convention for that. If you specify "month", then you are only passing the month-to-month transition the specified number of times. >"A month from now" on October 31st is November 30th, since that is the span of one full "month" - to wit, November. >"Six months from now" on October 31st is April 30th, for the same reason... Adrian Ja, good catch. What I need then is this: Larry lies on vowel months and truths on consonant months. His brother Darrel is even more dishonest, truthing only on vowel months and lying on consonant months.
Larry comments: I lied a month ago. I will lie again 180 days from now. Darrel comments: Larry is lying now. How can this be? That version kinda gives away what I had in mind and has multiple correct answers, but actually is a better puzzle. It shouldn't be too hard to figure out all the possibilities with this version. spike From stefano.vaj at gmail.com Sun Jan 23 10:49:15 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 23 Jan 2011 11:49:15 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> <4D279AFE.7010400@satx.rr.com> <005b01cbaed0$8164d250$842e76f0$@att.net> Message-ID: 2011/1/22 Darren Greer > >I'm always amazed at how few biblethumpers seem to know this.< > > I got over my amazement at the christian proselytizer's inability to > interpret the texts they are quoting from years ago. Being gay, I have made > two arguments based on logic with religious homophobes a hundred times that > they always manage to deke around or entirely ignore. > > What's the big deal about the bias against gay sex? Masturbation or zoophilia or coitus interruptus are equally condemned by monotheistic ethics, and, were it not for the obvious Darwinian disadvantage they suffer when they try to make it an unqualified rule for the entire population, strands prohibiting sex altogether would have become dominant long ago. But yet at least in the christian context there is little doubt that absolute chastity represents the very best and what is right at least for the religious élite, if not for the unwashed masses.
The West has survived and flourished in this respect only thanks to doublethink, schizophrenia and the fact that even at the top of christian power many people liked perhaps a few fancy mythological stories, but could not care less about the rest and behaved in a perfectly "pagan" fashion. Or perhaps it is the need to overcome in everyday life such repressive values (which extend far beyond sexuality) which made us stronger than others in the past ("what does not kill thee, etc."). Most New Atheists concentrate too much IMHO on the judeo-christian confusion between the mythical and the empirical realm, and too little on the fact that, as Nietzsche has shown well enough, one may become a perfect atheist and still remain largely and sometimes unconsciously conditioned by values which remain fundamentally christian. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From algaenymph at gmail.com Sun Jan 23 10:41:56 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Sun, 23 Jan 2011 02:41:56 -0800 Subject: [ExI] Feds Surprise Biotech Industry With Gene Patent Rule Message-ID: <4D3C05F4.4020902@gmail.com> A bit old, but something I think we should talk about: gene patents. This one's specifically about the notorious breast cancer genes that couldn't be researched because of patents. http://www.npr.org/templates/story/story.php?storyId=131046392 And just to be thorough, I decided to look up the patent holder on LittleSis. As someone with a mild interest in network analysis this site is just the thing I was looking for.
:) http://littlesis.org/search?q=myriad+genetics From spike66 at att.net Sun Jan 23 11:46:22 2011 From: spike66 at att.net (spike) Date: Sun, 23 Jan 2011 03:46:22 -0800 Subject: [ExI] new and improved intermittent liar In-Reply-To: <002701cbbae4$4d21c640$e76552c0$@att.net> References: <002501cbba85$c315beb0$49413c10$@att.net> <001b01cbbac6$d8081390$88183ab0$@att.net> <002701cbbae4$4d21c640$e76552c0$@att.net> Message-ID: <003301cbbaf3$28b486d0$7a1d9470$@att.net> ... On Behalf Of spike ... OK I think I have it now: Larry lies on vowel months and truths on consonant months. His brother Darrel is even more dishonest, truthing only on vowel months and lying on consonant months. Larry comments: I lied a month ago. I will lie again 180 days from now. Darrel comments: Larry is lying now and it is not August. How can this be? spike From stefano.vaj at gmail.com Sun Jan 23 12:40:08 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 23 Jan 2011 13:40:08 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <4D39988F.90004@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <4D39988F.90004@lightlink.com> Message-ID: On 21 January 2011 15:30, Richard Loosemore wrote: > Stefano Vaj wrote: > >> On 20 January 2011 20:27, Richard Loosemore wrote: >> >>> Anders Sandberg wrote: >>> E) Most importantly, the invention of a human-level, self-understanding >>> AGI would not lead to a *subsequent* period (we can call it the >>> "explosion period") in which the invention just sits on a shelf with >>> nobody bothering to pick it up. >>> >> >> Mmhhh. Aren't we already there? A few basic questions: >> >> 1) Computers are vastly inferior to humans in some specific tasks, yet >> vastly superior in others. Why human-like features would be so much >> more crucial in defining the computer "intelligence" than, say, faster >> integer factorisation? 
>> > > Well, remember that the hypothesis under consideration here is a system > that is capable of redesigning itself. > In principle, a cellular automaton, a Turing machine or a personal computer should be able to design themselves if we can do it ourselves. You just have to feed them the right program and be ready to wait for a long time... > "Human-level" does not mean identical to a human in every respect, it means > smart enough to understand everything that we understand. Mmhhh. Most humans do not "understand" (for any practical mean) anything about the working of any computational device, let alone their own brain. Does it qualify them as non-intelligent? :-/ 2) If the Principle of Computational Equivalence is true, what are we >> really all if not "computers" optimised for, and of course executing, >> different programs? Is AGI ultimately anything else than a very >> complex (and, on contemporary silicon processor, much slower and very >> inefficient) emulation of typical carbo-based units' data processing? >> > > The main idea of building an AGI would be to do it in such a way that we > understood how it worked, and therefore could (almost certainly) think of > ways to improve it. > We are already able to design (or profit from) devices that exhibit intelligence. The real engineering feat would be a Turing-passing system, which in turn probably requires a better reverse-engineering of human ability to pass it by definition. But many non-Turing passing systems may be more powerful and "intelligent", not to mention useful and/or dangerous, in other senses. Also, if we had a working AGI we could do something that we cannot do with > human brains: we could inspect and learn about any aspect of its function > in real time. > Perhaps. Or perhaps we will first be able to do that with biological brains. Who knows? 
Ultimately, we might even discover that bio or bio-like brains are a decently optimised platform for what they do best, and that silicon really shines in a "co-processor" position, same as GPUs vs CPUs. But of course this would not prevent us from implementing AGIs entirely on silicon, if we accept the performance hit. There are other factors that would add to these. One concerns the AGI's > ability to duplicate itself, after acquiring some knowledge. In the case of > a human, a single, world-leading expert in some field would be nothing more > than one expert. But if an AGI became a world expert, she could then > duplicate herself a thousand times over and work with her sisters as a team > (assuming that the problem under attack would benefit from a big team). > In principle, I do not see any specific reason why duplicating a bio-based brain should be any more impossible than duplicating the same data, features and processes on another platform... Lastly, there is the fact that an AGI could communicate with its sisters on > high-bandwidth channels, as I mentioned in my essay. We cannot do that. It > would make a difference. Really can't a fyborg do that? Aren't we already doing that? :-/ A workstation that is used to design the next Intel processor has zero > self-understanding, because it cannot autonomously start and complete a > project to redesign itself. > To form an opinion on the above, I would require a more precise definition of "autonomously", "understanding", "self" etc. In the meantime, I suspect that the difference essentially lies in the execution of different programs, or in the hallucination of supposed "bio-specific" gifts which do not really bear close inspection. The behavioural features and range of simpler animals and the end result of contemporary, ad-hoc, sophisticated computer emulations illustrate well, I believe, this point.
URL: From stefano.vaj at gmail.com Sun Jan 23 13:56:19 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 23 Jan 2011 14:56:19 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <20110121144018.GI23560@leitl.org> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <20110121144018.GI23560@leitl.org> Message-ID: On 21 January 2011 15:40, Eugen Leitl wrote: > Humans are competitive in the real world. They're reasonably well-rounded > for a reference point. > "Competitive" is a Darwinian reference. A competitive car may be a different concept, and extremely complicated computations happen in nature which need not be competitive in that sense to take place and reproduce themselves. So, a computational device is not going to exhibit anything like that unless we program it, or some routines running on it, to emulate Darwinian features, or an artificial environment where they may evolve them through random mutation and selection. In the latter case, programs or devices may end up being *very* competitive without exhibiting anything vaguely similar to human intelligence or even very sophisticated computations, for that matter (see under gray goo). In fact, mammal- or human-like intelligence is just one possible Darwinian strategy amongst a very large space of possible alternatives. Bacteria, as replicators, are, e.g., at least as competitive as we are. > 2) If the Principle of Computational Equivalence is true, what are we > > really all if not "computers" optimised for, and of course executing, > > different programs? Is AGI ultimately anything else than a very > > Can I check out your source? No, not the genome, the actual data > in your head. > Be my guest, just remember to return my brain at the end when you have finished disassembling its machine code...
:-) Seriously, I do not believe that we have to resort to very-low level emulation of bio brains to emulate one or another of their features, but once such emulation is satisfactory enough what we have actually performed is a black-box re-engineering of its relevant programs, so that the only practical difference with the original coding is probably copyright... > complex (and, on contemporary silicon processor, much slower and very > > inefficient) emulation of typical carbo-based units' data processing? > > Inefficient in terms of energy, initially, yes. But if you can spare > a few 10 MW you can do very interesting things with today's primitive > technology. Any prototype is inefficient, but these tend to ramp up > extremely quickly. In terms of physics of computation we people are > extremely inefficient. We only look good because the next-worst spot > is so much worse. But we're fixed, while our systems are improving > quite nicely. > I am not referring to energy efficiencies, but simply to the kind of efficiency where a human brain or a cellular automaton are very slow in calculating the square roots of large integers, and contemporary computers are quite slow at accurate, fine-grained pattern recognition. And I do not really see why we would be "fixed". Of course, there are a number of computations which are likely to be performed more efficiently on different hardware. You just have to add or move or replace the relevant portions, as in any system. E.g., distributed computing projects have dramatic bandwidth bottleneck problems in comparison with traditional HPC. Does it prevent them from attacking just the same problems? No. But take a Chinese Room with sufficient time and memory, you can have it emulate anything at arbitrary degrees of accuracy. And this is of course true also for silicon-based processors and any other device passing the very low threshold of universal computation.
-- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Jan 23 16:26:28 2011 From: spike66 at att.net (spike) Date: Sun, 23 Jan 2011 08:26:28 -0800 Subject: [ExI] new and improved intermittent liar In-Reply-To: <003301cbbaf3$28b486d0$7a1d9470$@att.net> References: <002501cbba85$c315beb0$49413c10$@att.net> <001b01cbbac6$d8081390$88183ab0$@att.net> <002701cbbae4$4d21c640$e76552c0$@att.net> <003301cbbaf3$28b486d0$7a1d9470$@att.net> Message-ID: <004c01cbbb1a$49ba96e0$dd2fc4a0$@att.net> ... On Behalf Of spike ... OK I think I have a version of this puzzle I like: Larry lies on vowel months and truths on consonant months. His brother Darrel is even more dishonest, truthing only on vowel months and lying on consonant months. Larry comments: It is October. I lied a month ago. I will lie again 180 days from now. Darrel comments: Larry is lying now. It is not August. What month is it? With this I think I have a unique answer. spike From thespike at satx.rr.com Sun Jan 23 18:29:07 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 23 Jan 2011 12:29:07 -0600 Subject: [ExI] Psi in a major science journal, J. Personality and Social Psychology. In-Reply-To: References: <4CBDC0A9.8030203@satx.rr.com> <24C7E438-38DB-4CF6-9FEC-2903715EC73A@bellsouth.net> <4CBE8D1D.4040209@satx.rr.com> <32B614F4-3F8F-491B-A647-8D703476290C@bellsouth.net> <4CBF48C1.5090504@satx.rr.com> <4CBF7B51.6080600@satx.rr.com> Message-ID: <4D3C7373.203@satx.rr.com> On 10/21/2010 3:23 PM, John Clark wrote: > An article describing a repetition of the experiment and a confirmation > of Bem's results published in Science, Nature or Physical Review Letters > would satisfy me. Otherwise it's just some guy who typed some stuff. 
On 10/22/2010 12:35 AM, John Clark wrote: > Bem is an unknown person and his work was refereed by more unknowns, so > If I read Bem's paper I will not know the methods or findings of his > experiment, I won't even know if there was an experiment, all I'll know > is that he typed some stuff. No replication published there yet, but there's an article by Greg Miller (Science 21 Jan) on this unknown person's typing. Presumably Miller suspects there might really have been an experiment (nine experiments, in fact). Damien Broderick From scerir at alice.it Sun Jan 23 19:04:02 2011 From: scerir at alice.it (scerir) Date: Sun, 23 Jan 2011 20:04:02 +0100 Subject: [ExI] 'The Immortalization Commission: Science and the Strange Quest to Cheat Death' In-Reply-To: <4D3C7373.203@satx.rr.com> References: <4CBDC0A9.8030203@satx.rr.com> <24C7E438-38DB-4CF6-9FEC-2903715EC73A@bellsouth.net> <4CBE8D1D.4040209@satx.rr.com> <32B614F4-3F8F-491B-A647-8D703476290C@bellsouth.net> <4CBF48C1.5090504@satx.rr.com> <4CBF7B51.6080600@satx.rr.com> <4D3C7373.203@satx.rr.com> Message-ID: <8645404F14634A8DA90CF45542406831@PCserafino> http://www.ft.com/cms/s/2/02d318c8-24e8-11e0-895d-00144feab49a.html 'The Immortalization Commission: Science and the Strange Quest to Cheat Death', by John Gray, Allen Lane, RRP £18.99, 288 pages In the review, by Stephen Cave (who is writing a book about immortality for Random House), one can also read .... 'The first part of the book, however, is not dedicated to the techno-utopians but to those whom Gray considers their intellectual predecessors: the worthy figures who together led the Society for Psychical Research, the London-based learned society founded in 1882 to examine the paranormal. Its members included the cream of Edwardian society, such as the Cambridge philosophy professor Henry Sidgwick, the co-discoverer of evolution Alfred Russel Wallace, prime minister Arthur Balfour and the poet WB Yeats.'
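Spike's "unique answer" variant a few messages up (Larry: "It is October. I lied a month ago. I will lie again 180 days from now."; Darrel: "Larry is lying now. It is not August.") can be checked mechanically. A brute-force sketch, not from the thread itself; it assumes a liar negates every statement he makes, and counts "180 days" literally on a non-leap-year calendar:

```python
from datetime import date, timedelta

MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]

def larry_lies(m):          # m = 1..12; Larry lies in vowel-initial months
    return MONTHS[m - 1][0] in "AEIOU"

hits = []
for month in range(1, 13):
    for day in range(1, 32):
        try:
            today = date(2011, month, day)           # any non-leap year
        except ValueError:
            continue                                 # skip nonexistent dates
        later = today + timedelta(days=180)
        larry = [month == 10,                        # "It is October"
                 larry_lies((month - 2) % 12 + 1),   # "I lied a month ago"
                 larry_lies(later.month)]            # "I will lie again in 180 days"
        darrel = [larry_lies(month),                 # "Larry is lying now"
                  month != 8]                        # "It is not August"
        larry_ok = not any(larry) if larry_lies(month) else all(larry)
        darrel_ok = all(darrel) if larry_lies(month) else not any(darrel)
        if larry_ok and darrel_ok:
            hits.append((MONTHS[month - 1], day))

print(hits)   # [('April', 1), ('April', 2), ('April', 3)]
```

Under those assumptions the month is unique (April), with the 180-day clause narrowing it to the first three days, since April 4th plus 180 days already lands in October, a lying month.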
From atymes at gmail.com Sun Jan 23 19:57:05 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 23 Jan 2011 11:57:05 -0800 Subject: [ExI] new and improved intermittent liar In-Reply-To: <002701cbbae4$4d21c640$e76552c0$@att.net> References: <002501cbba85$c315beb0$49413c10$@att.net> <001b01cbbac6$d8081390$88183ab0$@att.net> <002701cbbae4$4d21c640$e76552c0$@att.net> Message-ID: On Sun, Jan 23, 2011 at 2:00 AM, spike wrote: > ... On Behalf Of Adrian Tymes > ... > Subject: Re: [ExI] new and improved intermittent liar > > On Sat, Jan 22, 2011 at 10:29 PM, spike wrote: >>> Ordinarily if one says "a month ago" on 15 October, one means 15 > September. >>> But what if one says "a month from now" on 31 October? ?What day is that? >>> What if one says "six months from now" on October 31st? > >>Actually, there is a convention for that. ?If you specify "month", then you > are only passing the month-to-month transition the specified number of > times. > >>"A month from now" on October 31st is November 30th, since that is the span > of one full "month" - to wit, November. > >>"Six months from now" on October 31st is April 30th, for the same reason... > Adrian > > > Ja, good catch. ?What I need then is this: > > Larry lies on vowel months and truths on consonant months. ?His brother > Darrel is even more dishonest, truthing only on vowel months and lying on > consonant months. > > Larry comments: ?I lied a month ago. ?I will lie again 180 days from now. > > Darrel comments: ?Larry is lying now. > > How can this be? > > > > That version kinda gives away what I had in mind and has multiple correct > answers, but actually is a better puzzle. ?It shouldn't be too hard to > figure out all the possibilities with this version. Darrel's comment is irrelevant. He will always say that Larry is lying at the present time. At any given time, either he's lying when Larry isn't, or he isn't when Larry is. Larry's comments are both true on, say, May 1st-4th. 
A month ago was April, and 180 days hence is late October. I haven't checked, but there may be a few other days where both could be uttered. From sjatkins at mac.com Sun Jan 23 21:25:39 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 23 Jan 2011 13:25:39 -0800 Subject: [ExI] Yes, the Singularity is the greatest threat to humanity In-Reply-To: References: <4D338A8B.2030603@aleph.se> <20110117123448.GH23560@leitl.org> <20110118072329.GK23560@leitl.org> Message-ID: <1029C685-7F8E-4C77-B4AD-87066E1994C0@mac.com> On Jan 21, 2011, at 9:29 PM, Michael Anissimov wrote: > On Mon, Jan 17, 2011 at 11:23 PM, Eugen Leitl wrote: > On Mon, Jan 17, 2011 at 03:11:30PM -0800, Michael Anissimov wrote: > > > This is the basis of Eugen's opposition to Friendly AI -- he sees it as a > > This is not the basis. This is one of the many ways I'm pointing > out what you're trying to do is undefined. Trying to implement > something undefined is going to produce an undefined outcome. > > That's a classical case being not even wrong. > > Defining it is our goal... put yourself in my shoes, imagine that you think that uploading is much, much harder than intelligence-explosion-initiating AGI. What to do? What, what, what to do? Both are currently unknowably hard in that we don't know how to achieve either. Guesses that one is harder than the other may be right or wrong but are not decidable as to their correctness. Friendly AI requires not only AGI but also a provably correct way of constraining it so we get only things that we consider beneficial or at the least not destructive of all our values and of humanity itself. So the first problem is whether we have a very clear idea of what we consider truly beneficial even now much less into the future. Whatever we think that is should, by FAI theory as I understand it, be unbreakably encoded into the AGI design in such a way that it is immutable no matter how much the AGI self-improves. This is a type of Aladdin's Lamp problem.
You get one chance to get that 'wish' right without unforeseen consequences. We can't clearly define what "Friendly" means and we don't know how to safely codify it to immutably direct a being of ever increasing ability throughout whatever the future may bring. The next problem is that I see no reason to believe that it is remotely possible to make some one part of the AGI code immutable when the rest is open to examination and change. The entire effort seems to be driven by fear. It borders on "don't create any AGI until we can absolutely prove it is safe". In short, it is the Precautionary Principle at work. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jan 23 21:38:05 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 23 Jan 2011 13:38:05 -0800 Subject: [ExI] intermittent liar In-Reply-To: <000901cbba78$a6853260$f38f9720$@att.net> References: <000901cbba78$a6853260$f38f9720$@att.net> Message-ID: <7A518665-A029-4BF2-8445-DF13D6A16CC9@mac.com> On Jan 22, 2011, at 1:09 PM, spike wrote: > Oh my, I found a most excellent puzzle today. I found an answer, don?t know yet if it is right. See what you find: > . > . > . > Larry always tells lies during months that begin with vowels but always tells the truth during the other months. During one particular month, Larry makes these two statements: > > ? I lied last month. > ? I will lie again six months from now. > > During what month did Larry make these statements? Six months from the month after the vowel month is supposed to be another vowel month? Not possible. But two vowel months are six months from another, April and October. So I presume Larry is lying above. The first statement would be a lie anytime the preceding month was a non-vowel month. That he is lying constrains the current month to a vowel month. Since he cannot be really lying in six months, October and April are eliminated leaving August. 
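Samantha's elimination argument for the original puzzle checks out by brute force. A short sketch (not from the thread itself) encoding the vowel-month rule and the usual month-arithmetic reading of "six months from now":

```python
MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]

def lies(m):                      # Larry lies in months beginning with a vowel
    return MONTHS[m][0] in "AEIOU"

solutions = []
for m in range(12):               # candidate "current" month (0-based index)
    s1 = lies((m - 1) % 12)       # "I lied last month"
    s2 = lies((m + 6) % 12)       # "I will lie again six months from now"
    truthful = not lies(m)
    # Both statements must be true if Larry is truthing, false if he is lying.
    if (s1 and s2) if truthful else (not s1 and not s2):
        solutions.append(MONTHS[m])

print(solutions)   # ['August']
```

Only August survives: July is a truth month (so statement 1 is false) and February, six months on, is also a truth month (so statement 2 is false), which is exactly what a lying month requires.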
> - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jan 23 21:42:57 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 23 Jan 2011 13:42:57 -0800 Subject: [ExI] 'The Immortalization Commission: Science and the Strange Quest to Cheat Death' In-Reply-To: <8645404F14634A8DA90CF45542406831@PCserafino> References: <4CBDC0A9.8030203@satx.rr.com> <24C7E438-38DB-4CF6-9FEC-2903715EC73A@bellsouth.net> <4CBE8D1D.4040209@satx.rr.com> <32B614F4-3F8F-491B-A647-8D703476290C@bellsouth.net> <4CBF48C1.5090504@satx.rr.com> <4CBF7B51.6080600@satx.rr.com> <4D3C7373.203@satx.rr.com> <8645404F14634A8DA90CF45542406831@PCserafino> Message-ID: <8C59216B-EC8F-44F9-B0D5-437CAD79117F@mac.com> On Jan 23, 2011, at 11:04 AM, scerir wrote: > http://www.ft.com/cms/s/2/02d318c8-24e8-11e0-895d-00144feab49a.html > > 'The Immortalization Commission: Science and the Strange Quest to Cheat Death', > by John Gray, Allen Lane, RRP ?18.99, 288 pages > > In the review, by Stephen Cave (who is writing a book about immortality for Random House), > one can also read .... > > 'The first part of the book, however, is not dedicated to the techno-utopians but to those whom Gray considers their intellectual predecessors: the worthy figures who together led the Society for Psychical Research, the London-based learned society founded in 1882 to examine the paranormal. Its members included the cream of Edwardian society, such as the Cambridge philosophy professor Henry Sidgwick, the co-discoverer of evolution Alfred Russel Wallace, prime minister Arthur Balfour and the poet WB Yeats.' > Are we really in such blighted times that something as obviously desirable as ending aging is called "utopian" and compared to taking all aspects of the "paranormal" seriously? 
- s From atymes at gmail.com Sun Jan 23 21:45:10 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 23 Jan 2011 13:45:10 -0800 Subject: [ExI] 'The Immortalization Commission: Science and the Strange Quest to Cheat Death' In-Reply-To: <8C59216B-EC8F-44F9-B0D5-437CAD79117F@mac.com> References: <4CBDC0A9.8030203@satx.rr.com> <24C7E438-38DB-4CF6-9FEC-2903715EC73A@bellsouth.net> <4CBE8D1D.4040209@satx.rr.com> <32B614F4-3F8F-491B-A647-8D703476290C@bellsouth.net> <4CBF48C1.5090504@satx.rr.com> <4CBF7B51.6080600@satx.rr.com> <4D3C7373.203@satx.rr.com> <8645404F14634A8DA90CF45542406831@PCserafino> <8C59216B-EC8F-44F9-B0D5-437CAD79117F@mac.com> Message-ID: On Sun, Jan 23, 2011 at 1:42 PM, Samantha Atkins wrote: > Are we really in such blighted times that something as obviously desirable as ending aging is called "utopian" and compared to taking all aspects of the "paranormal" seriously? Yes. This surprises you? From darren.greer3 at gmail.com Sun Jan 23 22:32:34 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 23 Jan 2011 18:32:34 -0400 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> <4D279AFE.7010400@satx.rr.com> <005b01cbaed0$8164d250$842e76f0$@att.net> Message-ID: >What's the big deal about the bias against gay sex? Masturbation or zoophilia or coitus interruptus are equally condemned by monotheistic ethics, There are obvious differences. The bias is not simply against gay sex but gay relationships in general. Many gay men and women view the emotional relationship to be under attack rather than just the sex. And many religious homophobes, as I stated in my original post, ignore other biblical strictures in favor of promoting, often with a vicious zeal, the gay one.
I suspect it's a double whammy -- a case of evolutionary revulsion coupled with religious bias, making their objections particularly strenuous. >Nietzsche has shown well enough one may become a perfect atheist and still remain largely and sometimes unconsciously conditioned by values which remain fundamentally christian. True. But he took it even further, going as far as to say that all human truth is a moral semblance. (Beyond Good and Evil) Not all monotheistic ethics need to be disregarded, just as I'm not sure Nietzsche's approach to ethics and morality is the best way to go either. I've always been partial to the Aristotelian middle way, hovering dead centre between emotionalism and rationalism. Darren 2011/1/23 Stefano Vaj > 2011/1/22 Darren Greer > > >I'm always amazed at how few biblethumpers seem to know this.< >> >> I got over my amazement at the christian proselytizer's inability to >> interpret the texts they are quoting from years ago. Being gay, I have made >> two arguments based on logic with religious homophobes a hundred times that >> they always manage to deke around or entirely ignore. >> >> > What's the big deal about the bias against gay sex? Masturbation or > zoophilia or coitus interruptus are equally condemned my monotheistic > ethics, and, were it not for the obvious Darwinian disadvantage they suffer > when they try to make it an unqualified rule for the entire population, > strands prohibiting sex altogether would have become dominant since long. > > But yet at least in the christian context there is little doubt that > absolute chastity represents the very best and what is right at least for > the religious ?lite, if not for the unwashed masses. 
> > The West has survived and flourished in this respect only thank to > doublethink, schizofrenia and the fact that even at the top of christian > power many people liked perhaps a few fancy mythological stories, but could > not care less about the rest and behaved in a perfectly "pagan" fashion. > > Or perhaps it is the need to overcome in everyday life such repressive > values (which extend far beyond sexuality) which made us stronger than > others in the past ("what does not kill thee, etc."). > > Most New Atheists concentrate too much IMHO on the judeo-christian > confusion between the mythical and the empirical realm, and too less on the > fact that how Nietzsche has shown well enough one may become a perfect > atheist and still remain largely and sometimes unconsciously conditioned by > values which remain fundamentally christian. > > -- > Stefano Vaj > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Jan 24 06:26:35 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 23 Jan 2011 22:26:35 -0800 Subject: [ExI] Intermittent liar puzzle #3 Message-ID: Since I've answered the others, figured I might as well contribute one of my own. Jamal's religion requires him to lie to outsiders on one specific day of each year, and tell the truth to outsiders at all other times. You are an outsider, and you do not know what his schedule is, but you do know that the day does not change from year to year. You know that, for example, if he lies to you on any January 1st, he will lie to you on every January 1st, although you have already determined that January 1st is not his lying day. 
That is, you did not know his schedule until just now, when he told you that he will lie 365 days from now. What day is it now, and what day will he lie? From rebelwithaclue at gmail.com Mon Jan 24 07:16:09 2011 From: rebelwithaclue at gmail.com (rebelwithaclue at gmail.com) Date: Mon, 24 Jan 2011 00:16:09 -0700 Subject: [ExI] Intermittent liar puzzle #3 In-Reply-To: References: Message-ID: Today is Jan 2nd and He lies on Jan 2nd. This takes place in a leap year, so 365 days from now is Jan 1st, and we already know Jan 1st isn't his lying day. On Sun, Jan 23, 2011 at 11:26 PM, Adrian Tymes wrote: > Since I've answered the others, figured I might as > well contribute one of my own. > > Jamal's religion requires him to lie to outsiders on > one specific day of each year, and tell the truth to > outsiders at all other times. You are an outsider, > and you do not know what his schedule is, but you > do know that the day does not change from year to > year. You know that, for example, if he lies to you > on any January 1st, he will lie to you on every > January 1st, although you have already determined > that January 1st is not his lying day. > > That is, you did not know his schedule until just > now, when he told you that he will lie 365 days from > now. What day is it now, and what day will he lie? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avantguardian2020 at yahoo.com Mon Jan 24 07:33:38 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 23 Jan 2011 23:33:38 -0800 (PST) Subject: [ExI] Intermittent liar puzzle #3 and #4 In-Reply-To: References: Message-ID: <222900.69574.qm@web65607.mail.ac4.yahoo.com> ----- Original Message ---- > From: Adrian Tymes > To: ExI chat list > Sent: Sun, January 23, 2011 10:26:35 PM > Subject: [ExI] Intermittent liar puzzle #3 > > Since I've answered the others, figured I might as > well contribute one of my own. > > Jamal's religion requires him to lie to outsiders on > one specific day of each year, and tell the truth to > outsiders at all other times. You are an outsider, > and you do not know what his schedule is, but you > do know that the day does not change from year to > year. You know that, for example, if he lies to you > on any January 1st, he will lie to you on every > January 1st, although you have already determined > that January 1st is not his lying day. > > That is, you did not know his schedule until just > now, when he told you that he will lie 365 days from > now. What day is it now, and what day will he lie? It is now February 29th of a leap year and Jamal lies every February 28th. Now here's one - another one for you guys: Fred shows you two boxes - one that is red and one that is blue - and you may choose to take either. You know one of the boxes contains an explosive boobytrap and the other contains a million dollars cash but you don't know which box is which. You also know that Fred always lies or always tells the truth on alternate days. You are allowed to ask Fred only one question. How do you get the million dollars? Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D. Eisenhower
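Both leap-year answers to the Jamal puzzle rest on the same piece of date arithmetic, which is easy to verify directly. A sketch (not from the thread; the specific years are assumed for illustration):

```python
from datetime import date, timedelta

# rebelwithaclue's answer: spoken on Jan 2 of a leap year, "365 days
# from now" lands on Jan 1 -- already known NOT to be the lying day,
# so the statement is false and today must itself be the lying day.
print(date(2012, 1, 2) + timedelta(days=365))    # 2013-01-01

# Stuart's answer: spoken on Feb 29 of a leap year, "365 days from now"
# lands on Feb 28 -- the claimed annual lying day, so the statement is true.
print(date(2012, 2, 29) + timedelta(days=365))   # 2013-02-28
```

In an ordinary year "365 days from now" is the same calendar date, so only statements spoken during a leap year (before March 1st) can point at a different date and make either reading work.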
From stefano.vaj at gmail.com Mon Jan 24 11:20:45 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 24 Jan 2011 12:20:45 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: <005401cbac29$a062d7a0$e12886e0$@att.net> <007901cbac3d$bcf6e230$36e4a690$@att.net> <001201cbac56$ea0ce670$be26b350$@att.net> <89ED4D16-BD84-4704-A805-6AA0819412D3@bellsouth.net> <93F0B00A345048008AA17AF1E4E3641D@DFC68LF1> <007d01cbae9a$39803050$ac8090f0$@att.net> <4D279AFE.7010400@satx.rr.com> <005b01cbaed0$8164d250$842e76f0$@att.net> Message-ID: 2011/1/23 Darren Greer > There are obvious differences. The bias is not simply against gay sex but gay > relationships in general. Many gay men and women view the emotional > relationship to be under attack rather than just the sex. And many religious > homophobes, as I stated in my original post, ignore other biblical > strictures in favor of promoting, often with a vicious zeal, the gay one. > Mmhhh, this in fact has no real theological foundation - at least for a Catholic. In fact, perfectly chaste man-to-man love would be a *superior*, more spiritual, form of love in the Platonic sense, given that it has nothing to do with reproduction, sexuality and all those horrible "natural" and beastly things. In this respect, the distrust of all kinds of pleasures which are not strictly connected with, and limited to, what is required for the reproduction of human society (scarce nutrition, frigid and embarrassed monogamous sex...) is probably being exploited by people who are in fact just homophobic and do not in the least share the horror of sex as such. > True. But he took it even further, going as far as to say that all human > truth is a moral semblance. (Beyond Good and Evil) Not all monotheistic > ethics need to be disregarded, just as I'm not sure Nietzsche's approach to > ethics and morality is the best way to go either. > Sure. 
Nietzsche in fact admits that such leanings in ethical issues, such as the taste for suffering and punishment, are just "natural" for some of us. Personally, I just maintain that most of its tenets, even when presented in an entirely secularised context, are really at odds with "deep transhumanism"... ;) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Mon Jan 24 15:39:35 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 24 Jan 2011 07:39:35 -0800 (PST) Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: Message-ID: <369405.14001.qm@web114404.mail.gq1.yahoo.com> Stefano Vaj asked: >> Lastly, there is the fact that an AGI could communicate with its sisters on >> high-bandwidth channels, as I mentioned in my essay. We cannot do that. It >> would make a difference. > Really can't a fyborg do that? Aren't we already doing that? :-/ Absolutely not! When was the last time a class full of students downloaded, in a few seconds, all the knowledge and skills they need to read and interpret an NMR scan, or a bunch of crystallography data? When was the last time you gave someone the experience of your last holiday? (not a third-hand account, but the actual experience) If you ever needed to fly a helicopter in an emergency, would you be able to download the skill in a few seconds? We are not already doing that. Nor can we (afaik), as long as our brains remain biological, and certainly not without some radical tinkering. AIs would be able to do those things easily, and that's just scratching the surface in a rather unimaginative, human-centric, way. Yes, you can instruct a computer to download data about all those things, but how will that help you? Suppose your computer has limbs able to fly the helicopter, or operate the lab equipment. You'd then have to program it to use that data to perform the tasks you want, and you'd still not understand what you were doing. 
And that holiday would remain strictly third-hand. I don't want to go so far as to say that the fyborg idea is no use at all, but then neither are bows and arrows, or candles. Ben Zaiboc From spike66 at att.net Mon Jan 24 16:30:01 2011 From: spike66 at att.net (spike) Date: Mon, 24 Jan 2011 08:30:01 -0800 Subject: [ExI] Intermittent liar puzzle #3 and #4 In-Reply-To: <222900.69574.qm@web65607.mail.ac4.yahoo.com> References: <222900.69574.qm@web65607.mail.ac4.yahoo.com> Message-ID: <003401cbbbe3$f2fe3f30$d8fabd90$@att.net> ... On Behalf Of The Avantguardian ... Now here's another one for you guys: >Fred shows you two boxes - one that is red and one that is blue - and you may choose to take either. You know one of the boxes contains an explosive boobytrap and the other contains a million dollars cash but you don't know which box is which. You also know that Fred always lies or always tells the truth on alternate days. You are allowed to ask Fred only one question. How do you get the million dollars? Stuart LaForge "Fred, if I had asked you yesterday 'which box has the bucks?' which would you have told me?" The box he indicates will be the bomb, so you take the other one. {8-] spike From kellycoinguy at gmail.com Mon Jan 24 15:59:16 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 24 Jan 2011 08:59:16 -0700 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> References: <485988.6615.qm@web114416.mail.gq1.yahoo.com> <2EB06F06-1974-4D95-8BCD-89382F028C01@mac.com> Message-ID: On Thu, Jan 20, 2011 at 3:26 AM, Samantha Atkins wrote: > On Jan 12, 2011, at 12:28 PM, Ben Zaiboc wrote: > >> "spike" wrote: >> >>> I ask you then: suppose I personally knew a way to write >>> something >>> inspirational. 
I know an inspiring story based on >>> something that actually >>> happened, which I could fictionalize to protect the >>> identities, and it >>> involves one who came thru a very trying time by faith in >>> god. It really is >>> a good story. But you know and I know I am a flaming >>> atheist now. I could >>> use a pseudonym. Is it ethical for me to write >>> it? Would I be lying in a >>> sense? I have been struggling with this question for >>> years, and I am asking >>> for advice here. Johnny? Adrian? >>> Ben? Damien? Keith? Others? >> >> Of course you wouldn't be lying, not if you know it's a true story. >> As for whether you *should* write it, that's another thing. There are pros and cons. One of the cons is providing fuel for the god-squad. Hi, I'm new here, as well as being fairly new to atheistic beliefs. I think the question boils down to some extent as to what sort of atheist you are. The more militant atheists that proselytize their belief system to others would likely have a problem with your telling "faith promoting" stories. Those who are more laissez-faire should not have a problem with it. So, what is your personal goal? Do you want to see a world full of enlightened atheists? Or a world full of mind numbed religious zombies? Choose the kind of world you want to create, then take steps in that direction. Or decide that you don't care what others believe, and then whatever you do is OK. -Kelly From atymes at gmail.com Mon Jan 24 17:02:27 2011 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 24 Jan 2011 09:02:27 -0800 Subject: [ExI] Intermittent liar puzzle #3 and #4 In-Reply-To: <003401cbbbe3$f2fe3f30$d8fabd90$@att.net> References: <222900.69574.qm@web65607.mail.ac4.yahoo.com> <003401cbbbe3$f2fe3f30$d8fabd90$@att.net> Message-ID: On Mon, Jan 24, 2011 at 8:30 AM, spike wrote: > ... On Behalf Of The Avantguardian > ... 
> Now here's another one for you guys: > >>Fred shows you two boxes - one that is red and one that is blue - and you > may choose to take either. You know one of the boxes contains an explosive > boobytrap and the other contains a million dollars cash but you don't know > which box is which. You also know that Fred always lies or always tells the > truth on alternate days. You are allowed to ask Fred only one question. How > do you get the million dollars? Stuart LaForge > > "Fred, if I had asked you yesterday 'which box has the bucks?' which would > you have told me?" > > The box he indicates will be the bomb, so you take the other one. Assuming he hasn't switched the boxes' content since yesterday. From atymes at gmail.com Mon Jan 24 16:58:35 2011 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 24 Jan 2011 08:58:35 -0800 Subject: [ExI] Intermittent liar puzzle #3 In-Reply-To: References: Message-ID: 2011/1/23 : > Today is Jan 2nd and he lies on Jan 2nd. > This takes place in a leap year, so 365 days from now is Jan 1st, and we > already know Jan 1st isn't his lying day. Right. Actually, almost any day on a leap year would suffice - I was specifically thinking February 29th. From dan_ust at yahoo.com Mon Jan 24 17:19:35 2011 From: dan_ust at yahoo.com (Dan) Date: Mon, 24 Jan 2011 09:19:35 -0800 (PST) Subject: [ExI] Slime mold farms bacteria Message-ID: <761045.68100.qm@web30107.mail.mud.yahoo.com> http://arstechnica.com/science/news/2011/01/slime-mold-macdonald-farms-its-bacterial-meals.ars Not bad for a former fungus! Regards, Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 24 18:02:27 2011 From: spike66 at att.net (spike) Date: Mon, 24 Jan 2011 10:02:27 -0800 Subject: [ExI] Slime mold farms bacteria In-Reply-To: <761045.68100.qm@web30107.mail.mud.yahoo.com> References: <761045.68100.qm@web30107.mail.mud.yahoo.com> Message-ID: <005e01cbbbf0$dca531f0$95ef95d0$@att.net> . On Behalf Of Dan . 
Subject: [ExI] Slime mold farms bacteria http://arstechnica.com/science/news/2011/01/slime-mold-macdonald-farms-its-bacterial-meals.ars Not bad for a former fungus! Regards, Dan Now THAT is cool. Thanks Dan! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Mon Jan 24 18:27:32 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 24 Jan 2011 13:27:32 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <4D39988F.90004@lightlink.com> Message-ID: <4D3DC494.1040001@lightlink.com> Stefano Vaj wrote: > On 21 January 2011 15:30, Richard Loosemore > wrote: > > Well, remember that the hypothesis under consideration here is a > system that is capable of redesigning itself. > > > In principle, a cellular automaton, a Turing machine or a personal > computer should be able to design themselves if we can do it ourselves. > You just have to feed them the right program and be ready to wait for a > long time... This is meaningless in the present context, surely? Lots of things are capable of designing themselves in principle. I don't give a fig if some cellular automaton might do it in the next 10 gigayears, I am only considering the question of intelligence explosions happening as a result of building AGI systems. > "Human-level" does not mean identical to a human in every respect, > it means smart enough to understand everything that we understand. > > > Mmhhh. Most humans do not "understand" (for any practical purpose) anything > about the working of any computational device, let alone their own > brain. Does it qualify them as non-intelligent? :-/ Well, people who deliberately play semantic tricks with my sentences, THEM I am not so sure about ..... 
;-) > The main idea of building an AGI would be to do it in such a way > that we understood how it worked, and therefore could (almost > certainly) think of ways to improve it. > > > We are already able to design (or profit from) devices that exhibit > intelligence. The real engineering feat would be a Turing-passing > system, which in turn probably requires a better reverse-engineering of > human ability to pass it by definition. But many non-Turing passing > systems may be more powerful and "intelligent", not to mention useful > and/or dangerous, in other senses. So..... ? Does not relate to the point! Richard Loosemore From rebelwithaclue at gmail.com Mon Jan 24 18:54:45 2011 From: rebelwithaclue at gmail.com (rebelwithaclue at gmail.com) Date: Mon, 24 Jan 2011 11:54:45 -0700 Subject: [ExI] Intermittent liar puzzle #3 In-Reply-To: References: Message-ID: But if it's any other day of the year, today may be his lying day, and 365 days later he may actually be truthful. On Mon, Jan 24, 2011 at 9:58 AM, Adrian Tymes wrote: > 2011/1/23 : > > Today is Jan 2nd and He lies on Jan 2nd. > > This takes place in a leap year, so 365 days from now is Jan 1st, and we > > already know Jan 1st isn't his lying day. > > Right. Actually, almost any day on a leap year would > suffice - I was specifically thinking February 29th. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From msd001 at gmail.com Tue Jan 25 03:43:29 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 24 Jan 2011 22:43:29 -0500 Subject: [ExI] Slime mold farms bacteria In-Reply-To: <761045.68100.qm@web30107.mail.mud.yahoo.com> References: <761045.68100.qm@web30107.mail.mud.yahoo.com> Message-ID: 2011/1/24 Dan : > http://arstechnica.com/science/news/2011/01/slime-mold-macdonald-farms-its-bacterial-meals.ars > > Not bad for a former fungus! "It's possible to conceive of a system in which Dicty could evolve a way to farm a single species, much as leaf cutter ants work with only a single species of fungus, which might provide a more clear-cut advantage. " leaf-cutter ants... clear-cut advantage Do you think that was intentional or are some puns so bad that they emerge spontaneously as groaners? On the actual article though: does this suggest that "balanced selection" may have some humans responding positively to agricultural diets while others are better served by paleo? Maybe we're not all meant to eat the same stuff. From avantguardian2020 at yahoo.com Tue Jan 25 05:51:15 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 24 Jan 2011 21:51:15 -0800 (PST) Subject: [ExI] Intermittent liar puzzle #3 and #4 In-Reply-To: <003401cbbbe3$f2fe3f30$d8fabd90$@att.net> References: <222900.69574.qm@web65607.mail.ac4.yahoo.com> <003401cbbbe3$f2fe3f30$d8fabd90$@att.net> Message-ID: <397690.29245.qm@web65610.mail.ac4.yahoo.com> ----- Original Message ---- > From: spike > To: ExI chat list > Sent: Mon, January 24, 2011 8:30:01 AM > Subject: Re: [ExI] Intermittent liar puzzle #3 and #4 > > > ... On Behalf Of The Avantguardian > ... > Now here's another one for you guys: > > >Fred shows you two boxes - one that is red and one that is blue - and you > may choose to take either. You know one of the boxes contains an explosive > boobytrap and the other contains a million dollars cash but you don't know > which box is which. 
You also know that Fred always lies or always tells the > truth on alternate days. You are allowed to ask Fred only one question. How > do you get the million dollars? Stuart LaForge > > > > "Fred, if I had asked you yesterday 'which box has the bucks?' which would > you have told me?" > > The box he indicates will be the bomb, so you take the other one. Indeed. Well done, Spike. :-) Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure." - Dwight D. Eisenhower From spike66 at att.net Tue Jan 25 06:26:07 2011 From: spike66 at att.net (spike) Date: Mon, 24 Jan 2011 22:26:07 -0800 Subject: [ExI] Intermittent liar puzzle #3 and #4 In-Reply-To: <397690.29245.qm@web65610.mail.ac4.yahoo.com> References: <222900.69574.qm@web65607.mail.ac4.yahoo.com> <003401cbbbe3$f2fe3f30$d8fabd90$@att.net> <397690.29245.qm@web65610.mail.ac4.yahoo.com> Message-ID: <002801cbbc58$c24707d0$46d51770$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of The Avantguardian ... Indeed. Well done, Spike. :-) Stuart LaForge Thanks. Regarding my previous puzzle, unless I made a mistake in reasoning (quite possible) the only times those statements work is on 1 thru 4 April. As Adrian pointed out, there is no need for Darrel to say that Larry is lying. Both will always say that of each other throughout the year. So the puzzle reduces to: Larry truths in consonant months only. Darrel truths in vowel months only. They utter the following: Larry: It is August. Last month I lied. 180 days hence I will lie again. Darrel: It is not October. When is it? I am thinking of submitting it to the weekly puzzler on Click and Clack the Tappet Brothers. Have we any other Car Talk fans here? 
spike From bbenzai at yahoo.com Tue Jan 25 14:16:48 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 25 Jan 2011 06:16:48 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: Message-ID: <131505.6323.qm@web114402.mail.gq1.yahoo.com> Kelly Anderson wrote: > Hi, I'm new here, as well as being fairly new to atheistic > beliefs. Hi, Kelly, and welcome to the list. Just out of interest, what are these 'atheistic beliefs' of which you speak? Atheism means a *lack* of belief in the 'strong' sense of the word, in other words, the sense used by religious people, where the belief doesn't have any supporting evidence. I still believe the sun will rise tomorrow though, which is the 'weak' sense of the word, and is evidence-based. I don't think this can be called an 'atheistic belief' though. Many religious types try to (mis)characterise atheism as a Belief (strong sense) that there are no gods, but this is not true. That would be called anti-theism, I suppose, and would require just as much 'blind faith' as any religion. Well, nearly as much. I'm not talking about an assumption here, but a Belief. Atheism makes a reasonable assumption, based on the available evidence (both the logical absurdities and the lack of physical evidence). That's not a Belief. I think this is a very important point, and atheists should watch out for this misconception wherever it occurs (including in themselves), and try to correct it. Absence of belief != Belief of absence. Ben Zaiboc From atymes at gmail.com Tue Jan 25 17:04:54 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 25 Jan 2011 09:04:54 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <131505.6323.qm@web114402.mail.gq1.yahoo.com> References: <131505.6323.qm@web114402.mail.gq1.yahoo.com> Message-ID: Technically, what you speak of is agnosticism. Agnostic = absence of belief Atheist = belief of absence That's the difference between the terms. 
On Tue, Jan 25, 2011 at 6:16 AM, Ben Zaiboc wrote: > Atheism means a *lack* of belief in the 'strong' sense of the word, in other words, the sense used by religious people, where the belief doesn't have any supporting evidence. I still believe the sun will rise tomorrow though, which is the 'weak' sense of the word, and is evidence-based. I don't think this can be called an 'atheistic belief' though. > > Many religious types try to (mis)characterise atheism as a Belief (strong sense) that there are no gods, but this is not true. That would be called anti-theism, I suppose, and would require just as much 'blind faith' as any religion. Well, nearly as much. I'm not talking about an assumption here, but a Belief. > > Atheism makes a reasonable assumption, based on the available evidence (both the logical absurdities and the lack of physical evidence). That's not a Belief. > > I think this is a very important point, and atheists should watch out for this misconception wherever it occurs (including in themselves), and try to correct it. > > Absence of belief != Belief of absence. > > Ben Zaiboc > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From amon at doctrinezero.com Tue Jan 25 17:13:23 2011 From: amon at doctrinezero.com (Amon Zero) Date: Tue, 25 Jan 2011 17:13:23 +0000 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <131505.6323.qm@web114402.mail.gq1.yahoo.com> Message-ID: On 25 January 2011 17:04, Adrian Tymes wrote: > Technically, what you speak of is agnosticism. > > Agnostic = absence of belief > Atheist = belief of absence > That's the difference between the terms. That's one definition, but not the only possible one. Both terms are in fact fairly ambiguous. A-gnostic is simply a counterpoint to Gnostic (i.e. 
not-gnostic, not-knowing), and a-theist is similarly 'not adhering to a theism'. Neither is necessarily a 'weak' or 'strong' position, unless it is further defined to be so for the purposes of a particular argument. As in any argument, best define terms at the outset. An interesting variant I once heard is that agnostics don't claim we *don't* know, but that we *can't* know. I suppose that would be a case of weak vs strong agnosticism. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Jan 25 17:17:10 2011 From: sparge at gmail.com (Dave Sill) Date: Tue, 25 Jan 2011 12:17:10 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <131505.6323.qm@web114402.mail.gq1.yahoo.com> Message-ID: On Tue, Jan 25, 2011 at 12:04 PM, Adrian Tymes wrote: > Technically, what you speak of is agnosticism. > > Agnostic = absence of belief > > Atheist = belief of absence > > That's the difference between the terms. > No. From http://en.wikipedia.org/wiki/Atheism : Atheism, in a broad sense, is the rejection of belief in the existence of deities. In a narrower sense, atheism is specifically the position that there are no deities. Most inclusively, atheism is simply the absence of belief that any deities exist. Atheism is contrasted with theism, which in its most general form is the belief that at least one deity exists. The term atheism originated from the Greek ἄθεος (atheos), meaning "without god", which was applied with a negative connotation to those thought to reject the gods worshipped by the larger society. With the spread of freethought, skeptical inquiry, and subsequent increase in criticism of religion, application of the term narrowed in scope. The first individuals to identify themselves as "atheist" appeared in the 18th century. Atheists tend to lean toward skepticism regarding supernatural claims, citing a lack of empirical evidence. 
Atheists have offered several rationales for not believing in any deity. These include the problem of evil, the argument from inconsistent revelations, and the argument from nonbelief. Other arguments for atheism range from the philosophical to the social to the historical. Although some atheists have adopted secular philosophies, there is no one ideology or set of behaviors to which all atheists adhere. In Western culture, atheists are frequently assumed to be exclusively irreligious or unspiritual. However, atheism also figures in certain religious and spiritual belief systems, such as Buddhism, Hinduism and Jainism. Jainism and some forms of Buddhism do not advocate belief in gods, whereas Hinduism holds atheism to be valid, but difficult to follow spiritually. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at alice.it Tue Jan 25 19:22:45 2011 From: scerir at alice.it (scerir) Date: Tue, 25 Jan 2011 20:22:45 +0100 Subject: [ExI] energy catalyzer In-Reply-To: <761045.68100.qm@web30107.mail.mud.yahoo.com> References: <761045.68100.qm@web30107.mail.mud.yahoo.com> Message-ID: apparently, a very strange phenomenon http://www.journal-of-nuclear-physics.com/ there is a paper here http://www.journal-of-nuclear-physics.com/files/Levi%20and%20Bianchini%20Reports.pdf From thespike at satx.rr.com Tue Jan 25 19:44:22 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 25 Jan 2011 13:44:22 -0600 Subject: [ExI] energy catalyzer In-Reply-To: References: <761045.68100.qm@web30107.mail.mud.yahoo.com> Message-ID: <4D3F2816.3000905@satx.rr.com> On 1/25/2011 1:22 PM, scerir wrote: > apparently, a very strange phenomenon > http://www.journal-of-nuclear-physics.com/ > > there is a paper here > http://www.journal-of-nuclear-physics.com/files/Levi%20and%20Bianchini%20Reports.pdf ============= Forbidden You don't have permission to access /files/Levi and Bianchini Reports.pdf on this server. 
============= Btw, this "journal" is the blog of the latest "cold fusion" duo. From spike66 at att.net Tue Jan 25 20:01:36 2011 From: spike66 at att.net (spike) Date: Tue, 25 Jan 2011 12:01:36 -0800 Subject: [ExI] energy catalyzer In-Reply-To: References: <761045.68100.qm@web30107.mail.mud.yahoo.com> Message-ID: <000601cbbcca$ac70cb40$055261c0$@att.net> ... On Behalf Of scerir Subject: [ExI] energy catalyzer apparently, a very strange phenomenon http://www.journal-of-nuclear-physics.com/ there is a paper here http://www.journal-of-nuclear-physics.com/files/Levi%20and%20Bianchini%20Rep orts.pdf I am so certain this is bogus, I make the following comment: if this is real cold fusion, my physics textbook from college goes in the trash. If something in there is so fundamentally wrong, I can't trust the rest of it either. I will go from being a fundamentalist physicist to being a flaming aphysicist. I don't think my physics textbook is in any danger. spike From scerir at alice.it Tue Jan 25 20:24:25 2011 From: scerir at alice.it (scerir) Date: Tue, 25 Jan 2011 21:24:25 +0100 Subject: [ExI] energy catalyzer In-Reply-To: <4D3F2816.3000905@satx.rr.com> References: <761045.68100.qm@web30107.mail.mud.yahoo.com> <4D3F2816.3000905@satx.rr.com> Message-ID: <75DB128F4D87488B9B30540615320AC9@PCserafino> >> there is a paper here >> http://www.journal-of-nuclear-physics.com/files/Levi%20and%20Bianchini%20Reports.pdf > Forbidden > You don't have permission to access /files/Levi and Bianchini > Reports.pdf on this server. here it works, I'll send the paper privately (but I did not read it!) > Btw, this "journal" is the blog of the latest "cold fusion" duo. so, it is the old 'gang'! http://www.zpenergy.com/modules.php?name=News&file=article&sid=3249 From atymes at gmail.com Tue Jan 25 20:41:06 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 25 Jan 2011 12:41:06 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. 
In-Reply-To: References: <131505.6323.qm@web114402.mail.gq1.yahoo.com> Message-ID: 2011/1/25 Amon Zero : > On 25 January 2011 17:04, Adrian Tymes wrote: >> Technically, what you speak of is agnosticism. >> >> Agnostic = absence of belief >> Atheist = belief of absence >> That's the difference between the terms. > > That's one definition, but not the only possible one. It is the one I find in certain online dictionaries: http://dictionary.reference.com/browse/atheist a person who denies or disbelieves the existence of a supreme being or beings. http://dictionary.reference.com/browse/agnostic a person who holds that the existence of the ultimate cause, as god, and the essential nature of things are unknown and unknowable, or that human knowledge is limited to experience. From nebathenemi at yahoo.co.uk Tue Jan 25 20:51:15 2011 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Tue, 25 Jan 2011 20:51:15 +0000 (GMT) Subject: [ExI] Faecal transplant. No, really. In-Reply-To: Message-ID: <673313.40213.qm@web27005.mail.ukl.yahoo.com> I read an article in New Scientist this week about how doctors were experimenting on treating a variety of conditions by changing patients' intestinal bacteria via "faecal transplant". I was a little skeptical, but the research quoted exists. 
Tom From pharos at gmail.com Tue Jan 25 21:33:30 2011 From: pharos at gmail.com (BillK) Date: Tue, 25 Jan 2011 21:33:30 +0000 Subject: [ExI] energy catalyzer In-Reply-To: <75DB128F4D87488B9B30540615320AC9@PCserafino> References: <761045.68100.qm@web30107.mail.mud.yahoo.com> <4D3F2816.3000905@satx.rr.com> <75DB128F4D87488B9B30540615320AC9@PCserafino> Message-ID: On Tue, Jan 25, 2011 at 8:24 PM, scerir wrote: >>> there is a paper here >>> >>> http://www.journal-of-nuclear-physics.com/files/Levi%20and%20Bianchini%20Reports.pdf > >> Forbidden >> You don't have permission to access /files/Levi and Bianchini Reports.pdf >> on this server. > > here it works, I'll send the paper privately (but I did not read it!) >> >> Btw, this "journal" is the blog of the latest "cold fusion" duo. > > so, it is the old 'gang'! > http://www.zpenergy.com/modules.php?name=News&file=article&sid=3249 > > I think you have to go to the Journal Home Page first. It is the first Download Here link. Some web sites insist that you go to the Home Page first because they want to count visitors and track what you do on the site. BillK From darren.greer3 at gmail.com Tue Jan 25 22:43:01 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 25 Jan 2011 18:43:01 -0400 Subject: [ExI] Faecal transplant. No, really. In-Reply-To: <673313.40213.qm@web27005.mail.ukl.yahoo.com> References: <673313.40213.qm@web27005.mail.ukl.yahoo.com> Message-ID: >I read an article in New Scientist this week about how doctors were experimenting on treating a variety of conditions by changing their intestinal bacteria via "faecal transplant". I was a little skeptical, but the research quoted exists.< It's now fairly common hospital practice for treating C. difficile and other intestinal bacterial infections in Canada. A friend was just discussing today having performed the procedure recently. Mostly in an effort to gross me out while I was eating. Worked too. d. 
On Tue, Jan 25, 2011 at 4:51 PM, Tom Nowell wrote: > > http://en.wikipedia.org/wiki/Fecal_bacteriotherapy has a good set of > references with links to papers on treating bowel conditions. However, the > New Scientist article includes experimental therapy for Parkinson's, which > makes you wonder how much interaction there is between our brain, our > immune system and the vast ecosystem that is our digestive tract. > > Tom > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at alice.it Wed Jan 26 06:52:13 2011 From: scerir at alice.it (scerir) Date: Wed, 26 Jan 2011 07:52:13 +0100 Subject: [ExI] energy catalyzer In-Reply-To: <000601cbbcca$ac70cb40$055261c0$@att.net> References: <761045.68100.qm@web30107.mail.mud.yahoo.com> <000601cbbcca$ac70cb40$055261c0$@att.net> Message-ID: > I make the following comment: if this is real cold fusion, > my physics textbook from college goes in the trash. > spike They (Focardi and Rossi) do not claim it is "cold fusion" of atoms. They cannot explain the physics of the experiment. So they use the term "energy catalyzer" instead. So it has more to do with chemistry than with nuclear physics. Somebody said "piezo-chemistry"! s. 
From darren.greer3 at gmail.com Wed Jan 26 14:29:28 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 26 Jan 2011 10:29:28 -0400 Subject: [ExI] energy catalyzer In-Reply-To: References: <761045.68100.qm@web30107.mail.mud.yahoo.com> Message-ID: >On Tue, Jan 25, 2011 at 3:22 PM, scerir wrote: > apparently, a very strange phenomenon > http://www.journal-of-nuclear-physics.com/ > > there is a paper here > > http://www.journal-of-nuclear-physics.com/files/Levi%20and%20Bianchini%20Reports.pdf > < Thanks to my physics, chemistry and math courses, I can actually now understand some of this stuff. Not much of it, but some. Six months ago, if you had asked me what Avogadro's number was, I might have said something to do with a high yield of vegetables in the fall. :) Speaking of which, I was sent a paper on non-Pythagorean music theory called the Harmonic Matrix Theory that I'm interested in. A friend is performing a piece based on this theory (that he developed) at the New Music Festival in Canada this week. I told him I would look at his paper, written for the international computer music symposium next year in London, and show it around. But it's too much for me. I don't have the music knowledge or the math skills to be able to decipher it. Any music theory/math gurus in here who might like to take a look and give feedback? It's a short paper -- 1000 wds. I'm curious as to how the theory stands up to heavy math scrutiny. Because it's a brand new theory it hasn't yet been written about, and has only been presented once at Columbia University and somewhere in Italy. I can't simply look it up on Wikipedia. Think of it as a contribution to the ever-shrinking divide between science and art. I'll send it via direct e-mail if there are any takers. 
Darren > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 26 15:20:42 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 07:20:42 -0800 Subject: [ExI] energy catalyzer In-Reply-To: References: <761045.68100.qm@web30107.mail.mud.yahoo.com> Message-ID: <004501cbbd6c$997ee920$cc7cbb60$@att.net> . On Behalf Of Darren Greer . there is a paper here http://www.journal-of-nuclear-physics.com/files/Levi%20and%20Bianchini%20Reports.pdf< >. Six months ago, if you had asked me what Avogadro's number was, I might have said something to do with a high yield of vegetables in the fall. :) . Close. It was more about crop loss to pests, i.e. how many of those vegetables are in a mole. >. Think of it as a contribution to the ever-shrinking divide between science and art. I'll send it via direct e-mail if there are any takers. Darren Sounds interesting. I couldn't get the link to work. Do forward the paper, thanks. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Jan 26 18:39:25 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 26 Jan 2011 13:39:25 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds.
In-Reply-To: <4D3DC494.1040001@lightlink.com> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <4D39988F.90004@lightlink.com> <4D3DC494.1040001@lightlink.com> Message-ID: <7C06856E-D56D-4732-8E27-AB7AFA2633D2@bellsouth.net> On Jan 24, 2011, at 1:27 PM, Richard Loosemore wrote: > I don't give a fig if some cellular automaton might do in the next 10 gigayears Wow, 10 billion years is a long time, but let's see: the signals in the human brain move at about 10 meters a second, in an AI they would move at 300,000,000 meters a second, 30 million times faster, so it would take the AI a little over 300 years to do it. That's still a long time, but the AI would have more hardware, not just faster hardware, working on the problem. The human brain is about 15 cm across, so an AI could have a brain 45,000 meters across with no more delay between one part of its brain and another than we have with our human brain. With nanotechnology you should be able to fit the complexity of a human brain into one cubic centimeter, so in a sphere with a radius of 22,500 meters you could fit in 2.7 * 10^19 of them. So it will take the AI 0.35 nanoseconds to solve your 10 billion year long problem. Of course with cooling and other engineering problems you might only be able to pack a tenth as much stuff in; then it would take a glacial 3.5 nanoseconds to bring on the singularity. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 26 21:32:42 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 13:32:42 -0800 Subject: [ExI] sexy news anchors distracting Message-ID: <00ac01cbbda0$90c57db0$b2507910$@att.net> These kinds of studies really annoy me. The headline is false, exactly the opposite of the truth. This one says sexy news anchors are distracting, when of course exactly the opposite is true. The sexy news anchors are not distracting at all.
It's all their yakkity yak and bla bla that is the distraction, not the news anchor. I don't know what that other is about. http://www.aolnews.com/2011/01/26/study-male-viewers-find-sexy-news-anchors-distracting/?test=latestnews Study: Male Viewers Find Sexy News Anchors Distracting Men watching a news broadcast are more likely to pay attention if a sexy anchor is delivering the day's stories. Just what they're captivated by, however, is another matter entirely. According to a new study from researchers at Indiana University, male viewers snap to attention at the sight of a female anchor they find attractive, but are distracted by her looks and therefore less likely to remember what she had to say. }8-[ spike {8^D -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Jan 26 21:36:13 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 13:36:13 -0800 Subject: [ExI] sad news: uncle milton has perished Message-ID: <00b401cbbda1$0ececf90$2c6c6eb0$@att.net> How many of us had one or more of his creations when we were larvae? I had two then and one as a full grown adult. Uncle Milton will be missed, I do hope he had himself frozen. Milton Levine, who co-invented classic Ant Farm educational toy, dies at 97 in California http://www.google.com/hostednews/canadianpress/article/ALeqM5hpAKWszlDhIyqQ3o_D4MgPPuPhwA?docId=5765195 By The Associated Press (CP) - 1 hour ago LOS ANGELES, Calif. - Milton Levine, co-inventor of the classic Ant Farm toy that gave millions of youngsters a sneak peek into the underground lives of insects, has died at age 97. Levine died of natural causes on Jan. 16 at an assisted-care facility in Thousand Oaks, his son, Steven, told the Los Angeles Times. Uncle Milton's Ant Farm has sold more than 20 million copies, but it sprang from humble origins.
Levine was watching ants during a Fourth of July picnic in Studio City in 1956 when he was reminded of collecting ants in jars as a child, Levine told the Times in 2002. He recalled announcing: "We should make an antarium." Levine and his brother-in-law, E. J. Cossman, came up with a transparent habitat - a green plastic frame with a whimsical farm scene - that allowed people to watch ants dig tunnels in sand between two plastic panes. The ants were sent by mail. Collectors got a penny apiece to grab red harvester ants from the Mojave Desert. "Ants work day and night, they look out for the common good and never procrastinate," Levine told the Times. "Humanity can learn a lot from the ant." The toy was an instant hit. The product has remained essentially the same over the decades, although some small changes were made. The original glue was toxic to some ants, so it was replaced. The sand was switched to whitish volcanic ash in order to make the ants more visible. "The product has become a treasured part of American pop culture, having been recognized as one of the Top 100 Toys of the Century by the Toy Industry Association," according to a statement from Westlake Village-based Uncle Milton Industries. Levine's company became a multimillion-dollar business and today offers a range of science and nature toys, including butterfly and frog habitats and Star Wars-themed items. It was sold to Transom Capital Group last year for tens of millions of dollars. Levine sometimes joked that the ants' most amazing feat was putting his three children through college. In addition to his son, Levine is survived by his wife, Mauricette, daughters Harriet and Ellen; two sisters and three grandchildren. -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From darren.greer3 at gmail.com Wed Jan 26 22:01:57 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 26 Jan 2011 18:01:57 -0400 Subject: [ExI] sexy news anchors distracting In-Reply-To: <00ac01cbbda0$90c57db0$b2507910$@att.net> References: <00ac01cbbda0$90c57db0$b2507910$@att.net> Message-ID: Spike wrote: >This one says sexy news anchors are distracting, when of course exactly the opposite is true..< There was a short-lived program called "Naked News" on some network in the U.S. a dozen years ago with anchors delivering lead-ins in nothing but their bare skin. Saw it once. VERY distracting. In a good way. Anything that takes your mind off the murderous trivia of the evening news is jake with me. d. 2011/1/26 spike > > > These kinds of studies really annoy me. The headline is false, exactly the > opposite of the truth. This one says sexy news anchors are distracting, > when of course exactly the opposite is true. The sexy news anchors are not > distracting at all. Its all their yakkity yak and bla bla that is the > distraction, not the news anchor. I don?t know what that other is about. > > > > > http://www.aolnews.com/2011/01/26/study-male-viewers-find-sexy-news-anchors-distracting/?test=latestnews > > *Study: Male Viewers Find Sexy News Anchors Distracting* > > Men watching a news broadcast are more likely to pay attention if a sexy > anchor is delivering the day's stories. Just what they're captivated by, > however, is another matter entirely. According to a new study from > researchers at Indiana University, male viewers snap to attention at the > sight of a female anchor they find attractive, but are distracted by her > looks and therefore less likely to remember what she had to say? 
> > > > > > }8-[ > > > > spike > > > > > > > > > > > > {8^D > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Wed Jan 26 22:06:06 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 26 Jan 2011 15:06:06 -0700 Subject: [ExI] sexy news anchors distracting In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> Message-ID: Spike, did you ever encounter a very sexy sunday female school teacher? It really does help the lesson a great deal... John ; ) On 1/26/11, Darren Greer wrote: > Spike wrote: > >>This one says sexy news anchors are distracting, when of course exactly the > opposite is true..< > > There was a short-lived program called "Naked News" on some network in the > U.S. a dozen years ago with anchors delivering lead-ins in nothing but their > bare skin. Saw it once. VERY distracting. In a good way. Anything that takes > your mind off the murderous trivia of the evening news is jake with me. > > d. > > 2011/1/26 spike > >> >> >> These kinds of studies really annoy me. The headline is false, exactly >> the >> opposite of the truth. This one says sexy news anchors are distracting, >> when of course exactly the opposite is true. The sexy news anchors are >> not >> distracting at all. Its all their yakkity yak and bla bla that is the >> distraction, not the news anchor. I don?t know what that other is about. 
>> >> >> >> >> http://www.aolnews.com/2011/01/26/study-male-viewers-find-sexy-news-anchors-distracting/?test=latestnews >> >> *Study: Male Viewers Find Sexy News Anchors Distracting* >> >> Men watching a news broadcast are more likely to pay attention if a sexy >> anchor is delivering the day's stories. Just what they're captivated by, >> however, is another matter entirely. According to a new study from >> researchers at Indiana University, male viewers snap to attention at the >> sight of a female anchor they find attractive, but are distracted by her >> looks and therefore less likely to remember what she had to say? >> >> >> >> >> >> }8-[ >> >> >> >> spike >> >> >> >> >> >> >> >> >> >> >> >> {8^D >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > > -- > *"It's supposed to be hard. If it wasn't hard everyone would do it. The > 'hard' is what makes it great."* > * > * > *--A League of Their Own > * > From spike66 at att.net Wed Jan 26 22:40:36 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 14:40:36 -0800 Subject: [ExI] sexy news anchors distracting In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> Message-ID: <002301cbbdaa$0d196850$274c38f0$@att.net> . On Behalf Of Darren Greer Subject: Re: [ExI] sexy news anchors distracting Spike wrote: >>This one says sexy news anchors are distracting, when of course exactly the opposite is true..< >.There was a short-lived program called "Naked News" on some network in the U.S. a dozen years ago with anchors delivering lead-ins in nothing but their bare skin. WHAT? Short lived? Did it die? Damn this IS a sad day. I assumed Nekkid News was still around. I get clips in the email from friends on a regular basis. I suppose if I listened to the actual story I might be able to determine the approximate time frame. It is hard to do however. >. 
Saw it once. VERY distracting. In a good way. Hey wait a minute, You found her distracting? {8^D Darren I was having lunch with a colleague a few years ago. Our waitress was drop dead gorgeous. We were both admiring her. I said, "Hey pal, keep your eyes in your head, she's my orientation, not yours." He said, "I'm only gay, not dead." {8^D I was laughing so hard at that comeback I could scarcely finish my lunch for fear of beer spewing out my nose. >. Anything that takes your mind off the murderous trivia of the evening news is jake with me. d. Jake who? She told me her name was Jill on our last date, but it didn't get far enough for me to determine if it was really Jake in a very convincing disguise. I wasn't distracted or anything: http://www.google.com/imgres?imgurl=http://style.popcrunch.com/wp-content/up loads/2008/09/27283.jpg&imgrefurl=http://www.ourpoliticsblog.com/huhu/fox-10 -news-anchors.html&usg=__i4GhVAJTxpAajrvK_CfF_DONClo=&h=600&w=800&sz=216&hl= en&start=0&sig2=MyzbIyuzCslHW9P1FR-t5A&zoom=1&tbnid=vrIS1twJqLXZ2M:&tbnh=151 &tbnw=188&ei=7qFATa7iPIWCsQOfnYCyCA&prev=/images%3Fq%3Dfox%2Bnews%2Banchors% 26um%3D1%26hl%3Den%26sa%3DX%26biw%3D1054%26bih%3D685%26tbs%3Disch:1&um=1&itb s=1&iact=hc&vpx=463&vpy=319&dur=9720&hovh=194&hovw=259&tx=104&ty=100&oei=7qF ATa7iPIWCsQOfnYCyCA&esq=1&page=1&ndsp=16&ved=1t:429,r:8,s:0 It is difficult to understand why we still need news in either hard copy or audio copy. Why? We have computers now, what is the other for? How does it help us? Audio/video news has evolved into a form best described as the softest core pornography known, the only form of pornography still considered completely legitimate even by the church ladies. It's a hell of a world we live in, ja? spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From natasha at natasha.cc Wed Jan 26 22:38:53 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 26 Jan 2011 17:38:53 -0500 Subject: [ExI] Volitional Longevity In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> Message-ID: <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> I recall a couple of weeks ago, someone was posting about a phrase on DIY health and life extension / transmortality. The phrase had the word "volitional" or "volition" in it. Do you all remember this phrase and who used it? URL? Many thanks, Natasha From possiblepaths2050 at gmail.com Wed Jan 26 22:02:53 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 26 Jan 2011 15:02:53 -0700 Subject: [ExI] sad news: uncle milton has perished In-Reply-To: <00b401cbbda1$0ececf90$2c6c6eb0$@att.net> References: <00b401cbbda1$0ececf90$2c6c6eb0$@att.net> Message-ID: I owned several of his ant farms as a kid. I will never forget accidentally knocking one over and having my mom yell as the many ants scurried all over my bedroom! John ; ) On 1/26/11, spike wrote: > > How many of us had one or more of his creations when we were larvae? I had > two then and one as a full grown adult. Uncle Milton will be missed, I do > hope he had himself frozen. > > > > Milton Levine, who co-invented classic Ant Farm educational toy, dies at 97 > in California > > http://www.google.com/hostednews/canadianpress/article/ALeqM5hpAKWszlDhIyqQ3o_D4MgPPuPhwA?docId=5765195 > > > > By The Associated Press (CP) - 1 hour ago > > > > LOS ANGELES, Calif. - Milton Levine, co-inventor of the classic Ant Farm > > toy that gave millions of youngsters a sneak peek into the underground > > lives of insects, has died at age 97. > > > > Levine died of natural causes on Jan. 16 at an assisted-care facility in > > Thousand Oaks, his son, Steven, told the Los Angeles Times. > > > > Uncle Milton's Ant Farm has sold more than 20 million copies, but it > > sprang from humble origins.
> > > > Levine was watching ants during a Fourth of July picnic in Studio City in > > 1956 when he was reminded of collecting ants in jars as a child, Levine > > told the Times in 2002. > > > > He recalled announcing: "We should make an antarium." > > > > Levine and his brother-in-law, E. J. Cossman, came up with a transparent > > habitat - a green plastic frame with a whimsical farm scene - that allowed > > people to watch ants dig tunnels in sand between two plastic panes. > > > > The ants were sent by mail. Collectors got a penny apiece to grab red > > harvester ants from the Mojave Desert. > > > > "Ants work day and night, they look out for the common good and never > > procrastinate," Levine told the Times. "Humanity can learn a lot from the > > ant." > > > > The toy was an instant hit. The product has remained essentially the same > > over the decades, although some small changes were made. The original glue > > was toxic to some ants, so it was replaced. The sand was switched to > > whitish volcanic ash in order to make the ants more visible. > > > > "The product has become a treasured part of American pop culture, having > > been recognized as one of the Top 100 Toys of the Century by the Toy > > Industry Association," according to a statement from Westlake > > Village-based Uncle Milton Industries. > > > > Levine's company became a multimillion-dollar business and today offers a > > range of science and nature toys, including butterfly and frog habitats > > and Star Wars-themed items. It was sold to Transom Capital Group last year > > for tens of millions of dollars. > > > > Levine sometimes joked that the ants' most amazing feat was putting his > > three children through college. > > > > In addition to his son, Levine is survived by his wife, Mauricette, > > daughters Harriet and Ellen; two sisters and three grandchildren. 
> > > > -- > > > > > > > > From spike66 at att.net Wed Jan 26 22:44:51 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 14:44:51 -0800 Subject: [ExI] sexy news anchors distracting In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> Message-ID: <002801cbbdaa$a4ff5d00$eefe1700$@att.net> ... On Behalf Of John Grigg Subject: Re: [ExI] sexy news anchors distracting >...Spike, did you ever encounter a very sexy sunday female school teacher? It really does help the lesson a great deal...John ; ) Lesson? What lesson? {8^D John I expect the evolutionary psychology crowd will eventually discover something I have known for a very long time: most religious belief as practiced in churches is actually far more about sex than about a divine being. Keith might comment here. spike From darren.greer3 at gmail.com Wed Jan 26 23:12:01 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 26 Jan 2011 19:12:01 -0400 Subject: [ExI] sexy news anchors distracting In-Reply-To: <00ac01cbbda0$90c57db0$b2507910$@att.net> References: <00ac01cbbda0$90c57db0$b2507910$@att.net> Message-ID: >He said, ?I?m only gay, not dead.? < That is without a doubt the funniest thing I've ever read on exi. Oh my newton, that's hilarious. Thanks buddy, for making my night. I'd love to meet that guy! d. 2011/1/26 spike > > > These kinds of studies really annoy me. The headline is false, exactly the > opposite of the truth. This one says sexy news anchors are distracting, > when of course exactly the opposite is true. The sexy news anchors are not > distracting at all. Its all their yakkity yak and bla bla that is the > distraction, not the news anchor. I don?t know what that other is about. 
> > > > > http://www.aolnews.com/2011/01/26/study-male-viewers-find-sexy-news-anchors-distracting/?test=latestnews > > *Study: Male Viewers Find Sexy News Anchors Distracting* > > Men watching a news broadcast are more likely to pay attention if a sexy > anchor is delivering the day's stories. Just what they're captivated by, > however, is another matter entirely. According to a new study from > researchers at Indiana University, male viewers snap to attention at the > sight of a female anchor they find attractive, but are distracted by her > looks and therefore less likely to remember what she had to say... > > }8-[ > > spike > > > > > > > > > > > > {8^D > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Wed Jan 26 23:28:50 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 26 Jan 2011 19:28:50 -0400 Subject: [ExI] sexy news anchors distracting In-Reply-To: <002801cbbdaa$a4ff5d00$eefe1700$@att.net> References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <002801cbbdaa$a4ff5d00$eefe1700$@att.net> Message-ID: No need for Keith. Church is all about justifying sex. When I was a kid in Sunday school I used to look around and couldn't help but imagine all the teachers having sex with their spouses. Maybe that's why I turned out gay? d. On Wed, Jan 26, 2011 at 6:44 PM, spike wrote: > ... On Behalf Of John Grigg > Subject: Re: [ExI] sexy news anchors distracting > > >...Spike, did you ever encounter a very sexy sunday female school teacher? > It really does help the lesson a great deal...John ; ) > > > Lesson? What lesson?
{8^D > > John I expect the evolutionary psychology crowd will eventually discover > something I have known for a very long time: most religious belief as > practiced in churches is actually far more about sex than about a divine > being. > > Keith might comment here. > > spike > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Jan 26 23:47:36 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 26 Jan 2011 18:47:36 -0500 Subject: [ExI] sexy news anchors distracting In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <002801cbbdaa$a4ff5d00$eefe1700$@att.net> Message-ID: 2011/1/26 Darren Greer : > No need for Keith. Church is all about justifying sex. When I was a kid in > Sunday school I used to look around and couldn't help but imagine all the > teachers having sex with their spouses. Maybe that's why I tunred out gay? "no need for Keith" - as if that's an excessive use of resource. :) I just thought it was an amusing turn of phrase. 'might as well take the opportunity to express my thanks that this community of intellectuals provides a welcome influx of elevated thinking into an otherwise mundane day-to-day. We really do live in exciting times. From msd001 at gmail.com Wed Jan 26 23:19:37 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 26 Jan 2011 18:19:37 -0500 Subject: [ExI] Limiting factors of intelligence explosion speeds. 
In-Reply-To: <7C06856E-D56D-4732-8E27-AB7AFA2633D2@bellsouth.net> References: <4D38201E.8040703@aleph.se> <4D388CA8.60907@lightlink.com> <4D39988F.90004@lightlink.com> <4D3DC494.1040001@lightlink.com> <7C06856E-D56D-4732-8E27-AB7AFA2633D2@bellsouth.net> Message-ID: 2011/1/26 John Clark : > problem. Of course with cooling and other engineering problems you might > only be able to pack a tenth as much stuff in, then it would take a glacial > 3.5 nanoseconds to bring on the singularity. but I wanted the singularity in 2 nanoseconds, isn't there any way to speed things up? Maybe turn off antivirus or something? From phoenix at ugcs.caltech.edu Thu Jan 27 01:20:11 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Wed, 26 Jan 2011 17:20:11 -0800 Subject: [ExI] mass transit again In-Reply-To: <008f01cbb938$65860460$30920d20$@att.net> References: <20110120220122.d32794d095cdfcc0018508d9c136b552.d969e76a73.wbe@email09.secureserver.net> <008f01cbb938$65860460$30920d20$@att.net> Message-ID: <20110127012011.GA5856@ofb.net> On Thu, Jan 20, 2011 at 10:56:57PM -0800, spike wrote: > Ja and I thought of something else too. It could be that one particular > route is bad for having few riders. It goes to that big industrial park There's also the possibility that San Jose has not done public transit well. This does not cancel out the many other places, some of them even American, that have done it well. But having a low-volume route is also possible. A really good public transit system has to be competitive with cars for convenience, which means going lots of places lots of times; this may often mean not many people on board, especially in a sprawling area. Is that wasteful, or is it the price paid for convenience? A private car is typically parked for at least 20 out of 24 hours per day, and modally carries a single person, its driver, and generally less than its full capacity. Isn't that wasteful?
Aren't you inclined to defend that waste on the grounds of convenience or comfort to the car owner and family? So it goes. A convenient system of private cars has tons of cars sitting around taking up space; a convenient system of public transit has tons of vehicles moving around, even if sometimes not fully packed. 50 miles to your north is a much better public transit system, despite being far from top of the line itself. > allow a light rail car to pass with two persons aboard, one of whom is > driving. That's how we get such an attitude. > This is the only train I see on a regular basis. I can imagine there are > other lines that are more heavily used. That one to Moffett should probably It strikes me yet again that extropians and SF fans can be great dreamers about space and the future, and quite parochial down on Earth. -xx- Damien X-) From phoenix at ugcs.caltech.edu Thu Jan 27 01:51:24 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Wed, 26 Jan 2011 17:51:24 -0800 Subject: [ExI] mass transit again In-Reply-To: <20110127012011.GA5856@ofb.net> References: <20110120220122.d32794d095cdfcc0018508d9c136b552.d969e76a73.wbe@email09.secureserver.net> <008f01cbb938$65860460$30920d20$@att.net> <20110127012011.GA5856@ofb.net> Message-ID: <20110127015124.GA15099@ofb.net> On Wed, Jan 26, 2011 at 05:20:11PM -0800, Damien Sullivan wrote: > On Thu, Jan 20, 2011 at 10:56:57PM -0800, spike wrote: > > > Ja and I thought of something else too. It could be that one particular > > route is bad for having few riders. It goes to that big industrial park > > There's also the possibility that San Jose has not done public transit > well. This does not cancel out the many other places, some of them even > American, that have done it well. There's also the possibility that it picks up riders further along its route, and you're just seeing it at an endpoint. And that the planners are thinking long term: no riders yet, but as the system grows...
As for public transit in general, a big question is what is it for? In the US, it's commonly a second or even third class option, essentially charity for those who can't drive: too young, old, sick, otherwise incapable of driving, or too poor to afford a car. Like much of the American safety net, it's barely adequate at best. A suburban bus may run once an hour and not at all after 6 or 8pm, figuring retirees can wait and poor people should go live somewhere else. This is way different from a system designed to be more efficient in space, energy, and labor than cars, and to free as many people as possible from the need to drive. Works better with denser living, of course, and with making public transit higher priority than driving convenience. Looks like bus and rail lines every half-mile or less, running every 5 minutes or less, running from 5am to 1am, or even 24 hours (though usually not at the highest frequency late at night.) Done really well, the buses and street rail have their own lanes and signal priority, so can move faster than cars in traffic, as does metro. This isn't a pipe dream, but exists in various forms in various cities. Smaller cities might just have buses running every 10 minutes, not as good but still decent. -xx- Damien X-) From jrd1415 at gmail.com Thu Jan 27 02:13:38 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 26 Jan 2011 19:13:38 -0700 Subject: [ExI] Help with freezing phenomenon Message-ID: Recently, there has been discussion in the CI Google group about the CAS (cells alive system) freezing system being distributed commercially by a Japanese company, ABI. Originally intended for food preservation, the method claims much higher quality preservation, approaching "fresh". It was invented in 1996, publicized in an article in Forbes in 2008. http://www.forbes.com/forbes/2008/0602/076.html, and came to my attention last year. I googled it, read about it, and sent an inquiry to the company. 
As a cryonicist my interest is obvious, so I paid particular attention to the organ preservation angle. Then this week another article, on a site called Singularity Hub -- http://singularityhub.com/2011/01/23/food-freezing-technology-preserves-human-teeth-organs-next/ -- features the technology once again. This time tooth freezing, with a transplant success rate of 87 percent. A very short (and potentially very 'dirty') description of the process: a valuable bluefin tuna is caught and put in the CAS freezer on the boat. The cooling airflow temp is, say, -10 degrees C. (Higher than the -40 C typical of flash freezing.) Somewhere in the vicinity of the cooling chamber (probably around it) is a mechanism that produces a varying magnetic/electric field. The influence of this "field" during the cooling process is claimed to delay the onset of freezing. This may sometimes be phrased as "depressing the freezing point". Then, when the entire tuna is at a uniform (or maybe just close) temperature below the "undepressed" freezing point, the field is turned off, and the entire tuna 'flashes' solid. There are guesses (first generation? sure to be wrong?) that the ice is in the form of small granules that substantially reduce freeze damage. I need more damn details, or I'm gonna embarrass myself. That should get you started. Now here's my question. The thermal conductivity of water is 0.58 W/mK. Of ice, 1.12 W/mK (increasing as temperature drops). Does this mean that, given similar geometry, water will cool more slowly than ice? Does this mean that the conventional flash-frozen tuna will begin to cool ***more quickly*** as the outer layers freeze solid, and then freeze ever more quickly as the frozen 'shell' thickens? Does this mean that the CAS process ***SLOWS*** the flow of heat out of the frozen tuna, precisely because it maintains the tuna in a "liquid" -- i.e. not solidified -- state as it is cooled? I ask because I was expecting the reverse.
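To put a rough number on the question, here is a toy model: pure 1-D conduction in a slab, with handbook-style thermal diffusivities for liquid water and ice. Everything below is an illustrative assumption (slab size, temperatures, the diffusivity values themselves), and latent heat, convection and the moving freeze front are deliberately ignored, so it shows only the direction of the effect, not real-tuna magnitudes:

```python
# Toy check of the conductivity question: pure 1-D conduction in a slab,
# comparing how long the centre takes to reach 0 C when the material has
# the thermal diffusivity of liquid water versus that of ice.
# Latent heat, convection and the moving freeze front are all ignored.

def time_for_centre_to_freeze(alpha, L=0.10, n=51,
                              T0=20.0, Tsurf=-10.0, target=0.0):
    """Explicit finite-difference march; returns seconds until the
    centre of an L-metre slab first drops below `target` (deg C)."""
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / alpha      # stable: r = alpha*dt/dx^2 <= 0.5
    r = alpha * dt / (dx * dx)
    T = [T0] * n
    T[0] = T[-1] = Tsurf            # surfaces held at freezer temperature
    t = 0.0
    while T[n // 2] > target:
        T = ([Tsurf] +
             [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
              for i in range(1, n - 1)] +
             [Tsurf])
        t += dt
    return t

ALPHA_WATER = 1.4e-7   # m^2/s, liquid water near 0 C (handbook value)
ALPHA_ICE = 1.1e-6     # m^2/s, ice near 0 C (handbook value)

ratio = (time_for_centre_to_freeze(ALPHA_WATER) /
         time_for_centre_to_freeze(ALPHA_ICE))
print(ratio)   # ~8: the all-ice slab sheds heat roughly 8x faster
```

If that idealization is anywhere near right, the answer to my question is yes: a frozen shell speeds heat extraction, and a process that keeps the whole mass liquid should slow cooling by roughly the diffusivity ratio, all else being equal.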
Somehow I got the notion that the outer shell of ice (conventional freezing) would slow the cooling rate, compared to the "shell" of fluid tissue kept unfrozen until the end point of the CAS process. ****************************************** Water isn't tissue. And convection, ...what about convection?! Make believe there is no such thing as convection. I know, I know. ***************************************************** Cryopreservation of periodontal ligament cells with magnetic field for tooth banking http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WD5-5033XY2-1&_user=10&_coverDate=08/31/2010&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=4873b90b166ee18578b31651f180d459&searchtype=a ******************************************************* Best, jeff davis "Everything's hard till you know how to do it." Ray Charles From spike66 at att.net Thu Jan 27 05:04:24 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 21:04:24 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: Message-ID: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> >...A very short (and potentially very 'dirty') description of the process: a valuable bluefin tuna is caught and put in the CAS freezer on the boat...--that the ice is in the form of small granules that substantially reduces freeze damage...Does this mean that given similar geometry, water will cool more slowly than ice?... Best jeff davis Jeff, this is interesting but your explanation may be wandering off in the wrong direction. Do indulge me for a minute please. Water expands as it freezes. In a conventional freezer the outer layers freeze first, so as the internal layers freeze and expand, the outer layers are placed in tension and are filled with tiny cracks. We might suppose the presence of microcracks in fish could degrade its sushiability. 
If one uses a microwave oven, one may observe that the food cooks from the inside out, as opposed to the convection oven where it is reversed. In the process you describe, I can imagine a system whereby the center of the tissue (a fish or a valuable brain for instance) is kept artificially warm by very low level microwaves while the entire mass is very gradually cooled to just slightly below freezing. Then a second synchronized and 180 degrees out of phase microwave source could be injected from the other direction. At the center of the fish or valuable brain, destructive interference cancels the microwaves starting at the center. Then the center freezes, slowly, as the power of the pi-phase-synchronized microwave sources is gradually decreased. That way, the center freezes first. The frozen portion then expands slightly against a still unfrozen outer portion, which eventually freezes solid without the microcracking, maintaining the sushiability of the meat. Valuable knowledge could perhaps also be preserved, if we are talking about a brain with something actually in it. We could experiment with this theory by scavenging a couple of magnetrons from two identical junky old microwave ovens. I know how to synchronize them and control their power. So we could see if we could take a bowl of water with these magnetrons on either side and freeze it in such a way that it is perfectly clear as opposed to milky white with jillions of cracks. This would be like an icicle which freezes from inside out, as opposed to a usual bowl of water in the freezer, which freezes on top first then freezes inward. Perhaps Max's staff might want to join our efforts in finding a better way to preserve sushi and valuable brains. 
spike From spike66 at att.net Thu Jan 27 05:56:36 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 21:56:36 -0800 Subject: [ExI] Help with freezing phenomenon References: Message-ID: <001001cbbde6$f5b41a60$e11c4f20$@att.net> From: spike [mailto:spike66 at att.net] ... >...Water expands as it freezes. In a conventional freezer the outer layers freeze first, so as the internal layers freeze and expand, the outer layers are placed in tension and are filled with tiny cracks. We might suppose the presence of microcracks in fish could degrade its sushiability...spike And of course microcracks wouldn't do a brain any good either, so let me *expand* on the idea a bit. The recent tragic shooting of the politician in Arizona reminded me of a common medical procedure in a traumatic brain injury: opening the skull to allow swelling of the injured brain. If a cryonics team were to split the skull front to back to allow slight expansion during the freezing process, there might be a secondary benefit: it would allow insertion of a reflective heat sink between the hemispheres of the brain. The previous post on this topic suggested using destructive interference with pi-phase synchronized low level microwaves. To be more specific, we might shoot for the very high end frequency for tissue heating: about 4 gigahertz. Then we might imagine inserting a reflective stainless steel plate between the hemispheres of the brain, carefully avoiding damage to the corpus callosum, to act as a heat sink and a microwave reflector as well as a temperature control device. If we collimate the microwave beam with something as simple as a catadioptric collimator, we can use the reflected microwaves from the central plate to interfere destructively with the incoming non-reflected beam to let the center of each hemisphere freeze and expand first, before the periphery of the brain. 
I specified 4 GHz because that makes a wavelength of about 7.5 cm which is about what we want for destructive interference to take place at or near the center of a hemisphere. The presence of the highly conductive polished stainless steel plate would also help with the process control, by allowing extremely precise temperature monitoring and control as the brain is gradually and slightly supercooled to perhaps -5C before tissue freezing begins. Since it doesn't harm a brain to have its hemispheres slightly separated, we might imagine cooling coils inside the stainless steel plate itself. We might also imagine the plate in two or more pieces, to completely accommodate the corpus callosum. None of this is particularly difficult from a technical perspective. The reflector notion greatly simplifies the control task of synchronizing two microwave sources. In this latter scheme, the same collimated microwave beam interferes with itself after being reflected from a central plate inserted between the hemispheres, allowing the center of each hemisphere to cool slightly more than its surrounding tissue, and this entire discussion is making me yearn for sushi. Max, shall we get out the green notebooks and see if Alcor wants to gather up some patentable notions? The reason in this special case actually transcends the making of money (if that can be imagined.) Rather the reason for documenting and patenting would be to prevent some commie from patenting the notions and disallowing us from using them. In this spirit I freely donate any and all intellectual property residing in my particular brain regarding cryonics to Alcor and encourage other hemispheres to do likewise. 
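The numbers behind the 4 GHz choice can be checked in a couple of lines. These are free-space values; inside tissue the wavelength is considerably shorter because of water's high permittivity, which this sketch deliberately ignores:

```python
# Free-space wavelength of the proposed 4 GHz beam, and the extra path
# lengths (odd multiples of a half wavelength) that put a reflected
# beam 180 degrees out of phase with the incoming one.

C = 2.998e8                 # speed of light in vacuum, m/s
FREQ = 4e9                  # proposed microwave frequency, Hz

wavelength = C / FREQ
print(f"wavelength: {wavelength * 100:.2f} cm")   # ~7.5 cm, as stated

# A path difference of (2n+1) * lambda/2 gives destructive interference:
for n in range(3):
    offset = (2 * n + 1) * wavelength / 2
    print(f"cancelling path offset: {offset * 100:.2f} cm")
```

So the 7.5 cm figure checks out in free space; whether the cancellation geometry survives inside high-permittivity tissue is the open question.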
spike From spike66 at att.net Thu Jan 27 06:12:59 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 22:12:59 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> Message-ID: <001101cbbde9$3fb1aef0$bf150cd0$@att.net> ... On Behalf Of spike ... >...We could experiment with this theory by scavenging a couple of magnetrons from two identical junky old microwave ovens. I know how to synchronize them and control their power...spike After I wrote this, I thought of a better way to phase synchronize the microwave beam. Instead of trying to electronically phase synchronize two beams, we could use a beam splitter, then send half the signal off in a longer path than the other half, with the extra path length being an odd integer multiple of the half wavelength of the original beam. Max, shall I desist posting these notions forthwith? Plopping this into the public domain, we run the risk of some yahoo running to the patent office with it, ja? spike From atymes at gmail.com Thu Jan 27 06:29:24 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 26 Jan 2011 22:29:24 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <001001cbbde6$f5b41a60$e11c4f20$@att.net> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> Message-ID: On Wed, Jan 26, 2011 at 9:56 PM, spike wrote: > The reason in this special case actually > transcends the making of money (if that can be imagined.) Rather the reason > for documenting and patenting would be to prevent some commie from patenting > the notions and disallowing us from using them. You might be surprised to learn how common that is. I know that for at least one of my patents, prevention from someone else's selfish monopoly was the reason I got it. 
As to microwave heating: have you not noticed that microwaves - as commonly used in ovens, anyway - naturally heat the outside more than the inside, without any fancy setup? Seriously, try cooking a fist sized chunk of meat (chicken, ground beef, whatever) straight from the freezer for, say, 5 minutes. (If that cooks the meat entirely, down to the core, you have a powerful oven, so try another chunk for 2:30.) Better if the oven has an auto-rotating platter, but that might not be necessary to observe this. Immediately after, observe that the outside of the meat is warm, then cut into the core and observe the temperature there. Since this is the default behavior, why bother with complicated mechanisms, or inserting things that might damage the brain? Immerse in something just above freezing, reach temperature equilibrium, then lower the temperature while applying microwaves (to most of the surface: you need a channel for the heat to escape the core of the brain; a uniformly hot surface would prevent this). From spike66 at att.net Thu Jan 27 06:18:12 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 22:18:12 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <001001cbbde6$f5b41a60$e11c4f20$@att.net> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> Message-ID: <001201cbbde9$f9dcd520$ed967f60$@att.net> ... On Behalf Of spike ... If a cryonics team were to split the skull front to back to allow slight expansion during the freezing process, there might be a secondary benefit: it would allow insertion of a reflective heat sink between the hemispheres of the brain...spike The way I stated that wasn't clear: I meant cut the skull starting between the eyebrows and go up and over the top and back toward the cervical vertebrae. The heat sink then goes between the hemispheres and protrudes, like when a hipster has a Mohawk. 
spike From atymes at gmail.com Thu Jan 27 06:54:38 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 26 Jan 2011 22:54:38 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <001101cbbde9$3fb1aef0$bf150cd0$@att.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <001101cbbde9$3fb1aef0$bf150cd0$@att.net> Message-ID: On Wed, Jan 26, 2011 at 10:12 PM, spike wrote: > Max, shall I desist posting these notions forthwith? Plopping this into the > public domain, we run the risk of some yahoo running to the patent office > with it, ja? 1) If some yahoo does run to the patent office, inspired by your post? You've got provable prior art - and as such, if he does get a patent, a case to simply take his patent away from him (and into your pocket) wholesale, or at least invalidate his patent. Your evidence? *These very posts.* 2) Do you seriously believe no one has ever had a thought similar to what you pose before? Getting a patent involves far, FAR more than merely having and stating an idea. 3) If you have the idea, and squelch discussion of it, you will not develop it any further in the near term, guaranteed. Meanwhile, someone else, having been inspired by the same sources but not as paranoid, might develop it and seek a patent. Those who advise secrecy know how to stop leaks and reduce the chances that someone else will develop X. Those equations change drastically when your objective is not, "I want to develop X and get all the money from it", but rather, "I want X to be developed and commercialized ASAP, so that anyone who would like to take advantage of X can do so". I am increasingly seeing that the concept that one can achieve meaningful benefit from the commercialization of a technology, whether or not one gains financially from it, seems to be a new, alien thing to established business structures. 
But I am also increasingly seeing that I am not alone in seeing technologies where I materially, even financially, benefit even if someone else rakes in the direct monetary profits. From spike66 at att.net Thu Jan 27 06:44:24 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 22:44:24 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> Message-ID: <001401cbbded$a3239cb0$e96ad610$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes ... >...As to microwave heating: have you not noticed that microwaves - as commonly used in ovens, anyway - naturally heat the outside more than the inside, without any fancy setup? Seriously, try cooking a fist sized chunk of meat (chicken, ground beef, whatever) straight from the freezer for, say, 5 minutes... Ja but we are talking two different things. If you mean frozen meat, all bets are off, and ja I agree it thaws outboard first. But we have options with an unfrozen blob of meat or a brain, such as going to a higher frequency, which is a shorter wavelength, and collimating the beam. To gain an intuitive feel for what I mean, put a thick piece of steak in the microwave and note how it has a bad habit of exploding. I had the unfortunate experience last year of cooking a hunk of turkey in the microwave and having it explode about two seconds after I opened the door. Hot bird flesh hit me in the eye. It hurt. We would have a vested interest in seeing that this does not occur with some hapless prole's brain. Particularly if that prole is me. Mine is a fun brain in which to live. >...Since this is the default behavior, why bother with complicated mechanisms, or inserting things that might damage the brain? 
Immerse in something just above freezing, reach temperature equilibrium, then lower the temperature while applying microwaves (to most of the surface: you need a channel for the heat to escape the core of the brain; a uniformly hot surface would prevent this). Ja, part of what I had in mind originally with the plate is to use it as a heat sink, so the freezing process could actually start inboard and progress outward. You are right, it might not be necessary. But if we use the plate as a microwave reflector, it allows the use of a higher frequency shorter wavelength beam so that the tissue penetration is greater and perhaps more uniform. spike From spike66 at att.net Thu Jan 27 07:14:28 2011 From: spike66 at att.net (spike) Date: Wed, 26 Jan 2011 23:14:28 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <001101cbbde9$3fb1aef0$bf150cd0$@att.net> Message-ID: <001601cbbdf1$d6d14cc0$8473e640$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes ... Subject: Re: [ExI] Help with freezing phenomenon On Wed, Jan 26, 2011 at 10:12 PM, spike wrote: >> Max, shall I desist posting these notions forthwith? Plopping this >> into the public domain, we run the risk of some yahoo running to the >> patent office with it, ja? >...1) If some yahoo does run to the patent office, inspired by your post? You've got provable prior art - and as such, if he does get a patent, a case to simply take his patent away from him (and into your pocket) wholesale, or at least invalidate his patent. Your evidence? *These very posts.* Good thanks. I don't want patents in this area. This needs to be open. >...2) Do you seriously believe no one has ever had a thought similar to what you pose before? Getting a patent involves far, FAR more than merely having and stating an idea... 
The idea of a beam splitter and reflector, with a wavelength of around 8 cm to maximize microwave destructive interference in the center of a hemisphere, has, I suspect, never been patented. The rest of it, ja, likely a hundred previous yahoos have tried to patent that stuff. >... But I am also increasingly seeing that I am not alone in seeing technologies where I materially, even financially, benefit even if someone else rakes in the direct monetary profits. May we all benefit. And just to add another idea without going even more than I already have over the voluntary daily posting limit: is it not absurd that trains go woooonk wooooonk wonk woooooonk to warn traffic where streets cross railroads? Could they not simply have speakers set up at the roads that do the woonk wonk thing when the train is coming, so that it doesn't need to make all that racket that can be heard for miles? And why a horn, instead of a human voice saying "Heads up prole, train coming!" Is the train horn a bad solution to an engineering problem or what? What happened, did some yahoo patent the idea of making train warning horns next to the road instead of on board the train? I don't want to see cryonics technology get trapped in blind alleys because of intellectual property law. It's too important. spike From possiblepaths2050 at gmail.com Thu Jan 27 07:45:20 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 27 Jan 2011 00:45:20 -0700 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <001601cbbdf1$d6d14cc0$8473e640$@att.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <001101cbbde9$3fb1aef0$bf150cd0$@att.net> <001601cbbdf1$d6d14cc0$8473e640$@att.net> Message-ID: Spike, considering Max More is the new CEO of Alcor and that you are a recently retired engineer bursting with great ideas for the organization, why don't you volunteer there, or even apply for a paid position with them? 
John : ) On 1/27/11, spike wrote: > > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes > ... > Subject: Re: [ExI] Help with freezing phenomenon > > On Wed, Jan 26, 2011 at 10:12 PM, spike wrote: >>> Max, shall I desist posting these notions forthwith? Plopping this >>> into the public domain, we run the risk of some yahoo running to the >>> patent office with it, ja? > >>...1) If some yahoo does run to the patent office, inspired by your post? > You've got provable prior art - and as such, if he does get a patent, a case > to simply take his patent away from him (and into your pocket) wholesale, or > at least invalidate his patent. Your evidence? *These very posts.* > > Good thanks. I don't want patents in this area. This needs to be open. > >>...2) Do you seriously believe no one has ever had a thought similar to > what you pose before? Getting a patent involves far, FAR more than merely > having and stating an idea... > > The idea of a beam splitter and reflector, with a wavelength of around 8 cm > to maximize microwave destructive interference in the center of a hemisphere, > has, I suspect, never been patented. The rest of it, ja, likely a hundred > previous yahoos have tried to patent that stuff. > >>... But I am also increasingly seeing that I am not alone in seeing > technologies where I materially, even financially, benefit even if someone > else rakes in the direct monetary profits. > > May we all benefit. > > And just to add another idea without going even more than I already have > over the voluntary daily posting limit: is it not absurd that trains go > woooonk wooooonk wonk woooooonk to warn traffic where streets cross > railroads? Could they not simply have speakers set up at the roads that do > the woonk wonk thing when the train is coming, so that it doesn't need to > make all that racket that can be heard for miles? 
And why a horn, instead > of a human voice saying "Heads up prole, train coming!" Is the train horn a > bad solution to an engineering problem or what? What happened, did some > yahoo patent the idea of making train warning horns next to the road instead > of on board the train? > > I don't want to see cryonics technology get trapped in blind alleys because > of intellectual property law. It's too important. > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From eugen at leitl.org Thu Jan 27 13:44:43 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 27 Jan 2011 14:44:43 +0100 Subject: [ExI] mass transit again In-Reply-To: <20110127012011.GA5856@ofb.net> References: <20110120220122.d32794d095cdfcc0018508d9c136b552.d969e76a73.wbe@email09.secureserver.net> <008f01cbb938$65860460$30920d20$@att.net> <20110127012011.GA5856@ofb.net> Message-ID: <20110127134443.GY23560@leitl.org> On Wed, Jan 26, 2011 at 05:20:11PM -0800, Damien Sullivan wrote: > It strikes me yet again that extropians and SF fans can be great > dreamers about space and the future, and quite parochial down on Earth. Careful with that overbroad brush thar, brutha. From darren.greer3 at gmail.com Thu Jan 27 14:44:24 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 27 Jan 2011 10:44:24 -0400 Subject: [ExI] sexy news anchors distracting In-Reply-To: <002301cbbdaa$0d196850$274c38f0$@att.net> References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <002301cbbdaa$0d196850$274c38f0$@att.net> Message-ID: >Hey wait a minute, You found her distracting? < No, not the female. There was a male counterpart version. I remember once tuning in to the naked chef, but found it to be false advertising. 
2011/1/26 spike > > > > > *...* *On Behalf Of *Darren Greer > *Subject:* Re: [ExI] sexy news anchors distracting > > > > Spike wrote: > > > > >>This one says sexy news anchors are distracting, when of course exactly > the opposite is true..< > > > > >...There was a short-lived program called "Naked News" on some network in > the U.S. a dozen years ago with anchors delivering lead-ins in nothing but > their bare skin... > > > > WHAT? Short lived? Did it die? Damn this IS a sad day. I assumed Nekkid > News was still around. I get clips in the email from friends on a regular > basis. I suppose if I listened to the actual story I might be able to > determine the approximate time frame. It is hard to do however. > > > > >...Saw it once. VERY distracting. In a good way... > > > > Hey wait a minute, you found her distracting? {8^D > > > > Darren I was having lunch with a colleague a few years ago. Our waitress > was drop dead gorgeous. We were both admiring her. I said, "Hey pal, keep > your eyes in your head, she's my orientation, not yours." He said, "I'm > only gay, not dead." {8^D > > > > I was laughing so hard at that comeback I could scarcely finish my lunch > for fear of beer spewing out my nose. > > > > >...Anything that takes your mind off the murderous trivia of the evening > news is jake with me. d. > > > > Jake who? She told me her name was Jill on our last date, but it didn't > get far enough for me to determine if it was really Jake in a very > convincing disguise. 
I wasn't distracted or anything: > > > http://www.google.com/imgres?imgurl=http://style.popcrunch.com/wp-content/uploads/2008/09/27283.jpg&imgrefurl=http://www.ourpoliticsblog.com/huhu/fox-10-news-anchors.html&usg=__i4GhVAJTxpAajrvK_CfF_DONClo=&h=600&w=800&sz=216&hl=en&start=0&sig2=MyzbIyuzCslHW9P1FR-t5A&zoom=1&tbnid=vrIS1twJqLXZ2M:&tbnh=151&tbnw=188&ei=7qFATa7iPIWCsQOfnYCyCA&prev=/images%3Fq%3Dfox%2Bnews%2Banchors%26um%3D1%26hl%3Den%26sa%3DX%26biw%3D1054%26bih%3D685%26tbs%3Disch:1&um=1&itbs=1&iact=hc&vpx=463&vpy=319&dur=9720&hovh=194&hovw=259&tx=104&ty=100&oei=7qFATa7iPIWCsQOfnYCyCA&esq=1&page=1&ndsp=16&ved=1t:429,r:8,s:0 > > > > It is difficult to understand why we still need news in either hard copy or > audio copy. Why? We have computers now, what is the other for? How does > it help us? Audio/video news has evolved into a form best described as the > softest core pornography known, the only form of pornography still > considered completely legitimate even by the church ladies. > > > > It's a hell of a world we live in, ja? > > > > spike > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Thu Jan 27 14:49:49 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 27 Jan 2011 07:49:49 -0700 Subject: [ExI] Help with freezing phenomenon Message-ID: On Thu, Jan 27, 2011 at 5:00 AM, Jeff Davis wrote: snip > > That should get you started. Now here's my question. > snip > I ask because I was expecting the reverse. 
Somehow I got the notion > that the outer shell of ice (conventional freezing) would slow the > cooling rate, compared to the "shell" of fluid tissue kept unfrozen > until the end point of the CAS process. > > ****************************************** > > Water isn't tissue. And convection, ...what about convection?! > > Make believe there is no such thing as convection. > > I know, I know. The current practice at Alcor is to load tissue with so much cryoprotective solution and ice blockers that ice never forms at all. Alcor uses burr holes in the skull to observe the brain for swelling. We can reverse swelling in some cases by increasing the ramp of cryoprotective addition (which dehydrates). It all depends on how much time/temperature the patient was subjected to before starting the cryoprotective ramp. Really fresh, cold brains don't swell at all. Convection just doesn't apply to something like a slab of meat. There is no path for the liquid to circulate. Gross reality check. Long ago when I was working on the very first computer controlled freezing system for heads, Hugh Hixon and I needed a test heat load for it. So we filled a head-sized plastic bag with water and froze that. Boy what a mess. It shell-froze, then, as more inside froze, it cracked, big cracks like half an inch wide. Inside looked like crushed ice. Hugh Hixon had bags of the then current cryoprotective perfusate around so the next test run we used those. The stuff didn't actually become a solid all the way down to dry ice temperature. It looked like milk and got stiffer than putty. Keith PS. The system used a pump for silicone oil and a solenoid that switched the flow of oil over dry ice or bypassed the dry ice to track the desired temperature descent. It evolved over time to what is used today. 
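The PS describes what is in essence a bang-bang controller: route the oil over the dry ice when the load is warmer than the target descent curve, bypass it otherwise. A toy version of that loop, with every constant invented for illustration (this is not Alcor's actual control code):

```python
# Toy simulation of the bang-bang cooling loop Keith describes: oil
# passes over dry ice when the load is above the target ramp, and
# bypasses it otherwise. All rates and temperatures are made up.

def target(t, start=20.0, rate=-0.5):
    """Desired temperature (C) at time t (minutes): a linear descent."""
    return start + rate * t

def simulate(minutes=120, dt=1.0):
    temp = 20.0                 # load temperature, C
    history = []
    for step in range(int(minutes / dt)):
        t = step * dt
        if temp > target(t):
            temp += -1.5 * dt   # oil over dry ice: strong cooling
        else:
            temp += 0.1 * dt    # bypass: slight warming from ambient
        history.append(temp)
    return history

temps = simulate()
# The load chatters within a degree or two of the descending ramp,
# ending near the -40 C endpoint of the two-hour descent.
print(round(temps[-1], 1))
```

The solenoid's on/off duty cycle settles wherever it needs to in order to track the ramp, which is why such a crude actuator can follow a smooth temperature descent.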
From spike66 at att.net Thu Jan 27 15:56:15 2011 From: spike66 at att.net (spike) Date: Thu, 27 Jan 2011 07:56:15 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <001101cbbde9$3fb1aef0$bf150cd0$@att.net> <001601cbbdf1$d6d14cc0$8473e640$@att.net> Message-ID: <003201cbbe3a$bb21b480$31651d80$@att.net> >...Spike, considering Max More is the new CEO of Alcor and that you are a recently retired engineer bursting with great ideas for the organization, why don't you volunteer there, or even apply for a paid position with them? John : ) You are too kind Johnny. I live in Taxifornia. Alcor is in Arizona. Can't sell my house. Durn near couldn't give it away. I did have an idea however, one that involves you. Alcor has volunteer teams I understand, or had at one time. These would attend local freezings (what are they called?) and assist the freezers (what are they called?) in whatever they needed, bring sandwiches, fetch ice, whatever. I would be glad to volunteer for that, but it occurred to me there is another need an Alcor volunteer could carry out, and you could do this much better than I could. While we need gofers to support the icers, we need a good sympathetic type to support the grieving family, coach them thru a time when they just lost grandma and all these geeks are here doing these strange things to the recently deceased, things that need to be done right there on site quickly. They need an emotional coach, they need some calm person with documentation to talk to the constables should they come rushing in with sidearms drawn, wanting to know what the hell is going on, someone who knows exactly what Alcor does and why, some warmhearted softy person who can sit with the family and just hug and cry. This whole freezing heads business is two parts technology, one part emotion. I would be willing to do that for the SF Bay area cryo-corpses' families (what are they called? 
We need a cryonics terminology dictionary somewhere. Let's start one here.) John I know a lot of ExI types from way back. If I were to choose from among them a person most suited for that kind of volunteer work, it would be you pal. If I had to choose from among the ExI crowd someone to just sit with me at my mother's death bed, I choose you. spike From stefano.vaj at gmail.com Thu Jan 27 16:33:10 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 27 Jan 2011 17:33:10 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <131505.6323.qm@web114402.mail.gq1.yahoo.com> References: <131505.6323.qm@web114402.mail.gq1.yahoo.com> Message-ID: On 25 January 2011 15:16, Ben Zaiboc wrote: > Atheism makes a reasonable assumption, based on the available evidence (both the logical absurdities and the lack of physical evidence). That's not a Belief. Why should we feel bound by "reasonable assumptions"? I consider it my duty to disbelieve the existence of the supreme entity of monotheistic religions as a matter of faith and on moral grounds. ;-) I do not see how the fans of those religions could ever object to this. -- Stefano Vaj From natasha at natasha.cc Thu Jan 27 16:34:38 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 27 Jan 2011 10:34:38 -0600 Subject: [ExI] Volitional Longevity In-Reply-To: <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> Message-ID: Is anyone reading my emails? If not, just let me know and I'll go elsewhere, but it would be sweet to get a reply to posts. 
Many thanks, Natasha Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of natasha at natasha.cc Sent: Wednesday, January 26, 2011 4:39 PM To: extropy-chat at lists.extropy.org Subject: [ExI] Volitional Longevity I recall a couple of weeks ago, someone was posting about a phrase on DIY health and life extension / transmortality. The phrase had the word "volitional" or "volition" in it. Do you all remember this phrase and who used it? URL? Many thanks, Natasha _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From stefano.vaj at gmail.com Thu Jan 27 15:34:57 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 27 Jan 2011 16:34:57 +0100 Subject: [ExI] Limiting factors of intelligence explosion speeds In-Reply-To: <369405.14001.qm@web114404.mail.gq1.yahoo.com> References: <369405.14001.qm@web114404.mail.gq1.yahoo.com> Message-ID: On 24 January 2011 16:39, Ben Zaiboc wrote: > Stefano Vaj asked: > >>> Lastly, there is the fact that an AGI could communicate with its sisters on >>> high-bandwidth channels, as I mentioned in my essay. We cannot do that. It >>> would make a difference. > >> Really can't a fyborg do that? Aren't we already doing that? :-/ > > Absolutely not! 
You can have a man who can fly a helicopter to Cincinnati, a computer who decides to visit Cincinnati, or a man telling a computer in natural language to bring him to Cincinnati. The latter does not know how to pilot a helicopter? What else is new? We have been routinely doing things the working thereof we do not really understand for a long time before any digital computer was born. And of course we are doing that more and more since. >From a system-wide perspective, the features of the whole system itself remain undistinguishable. This is why IMHO AGIs or a singularity are not "bound to happen", nor imply any especial "rapture" or "doom" scenarios, but are rather things worth fighting for if one thinks that the will of overcoming themselves is the only thing that make humans of any interest. -- Stefano Vaj From darren.greer3 at gmail.com Thu Jan 27 16:51:19 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 27 Jan 2011 12:51:19 -0400 Subject: [ExI] Volitional Longevity In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> Message-ID: I'm reading Natasha. Generally if I don't reply it means I agree. It's only when we post something egregious that everyone jumps. :) d. On Thu, Jan 27, 2011 at 12:34 PM, Natasha Vita-More wrote: > Is anyone ready my emails? If not, just let me knw and I'll go elsewhere, > but it would be sweet to get a reply to posts. > > Many thanks, > Natasha > > > Natasha Vita-More > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > natasha at natasha.cc > Sent: Wednesday, January 26, 2011 4:39 PM > To: extropy-chat at lists.extropy.org > Subject: [ExI] Volitional Longevity > > I recall a couple of weeks ago, someone was posting about a phrase on DIY > health and life extension / transmortality. The phrase had the world > "volitional" or "volition" in it. 
> > Do you all remember this phrase and who used it? url? > > Many thanks, > Natasha > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Jan 27 16:38:23 2011 From: spike66 at att.net (spike) Date: Thu, 27 Jan 2011 08:38:23 -0800 Subject: [ExI] fwd: Limiting factors of intelligence explosion speeds Message-ID: <003601cbbe40$9e1805f0$da4811d0$@att.net> Forwarded message: From: Omar Rahman Original message follows: From: Anders Sandberg One of the things that struck me during our Winter Intelligence workshop on intelligence explosions was how confident some people were about the speed of recursive self-improvement of AIs, brain emulation collectivities or economies. Some thought it was going to be fast in comparison to societal adaptation and development timescales (creating a winner-takes-all situation), some thought it would be slow enough for multiple superintelligent agents to emerge. This issue is at the root of many key questions about the singularity (one superintelligence or many? how much does friendliness matter?) It would be interesting to hear this list's take on it: what do you think is the key limiting factor for how fast intelligence can amplify itself?
Some factors that have been mentioned in past discussions:

Economic growth rate
Investment availability
Gathering of empirical information (experimentation, interacting with an environment)
Software complexity

'Software complexity' stands out to me as the big limiting factor. Assuming it applies at all to machine intelligences, Godel's incompleteness theorem would seem to imply that once this thing starts off it can't just go forward with some 'ever greater intelligence' recipe. Add to this the cognitive load of managing a larger and larger system and the system will have to optimize and, oddly enough, 'automate' subprocesses; much as we don't consciously breathe, but we can control it if we wish. Once this thing hits its 'Godel Limit', if it wishes to progress further it will be forced into the 'gathering of empirical information'; at this stage it is unknown how long it will take for a new axiom to be discovered.

Hardware demands vs. available hardware
Bandwidth
Lightspeed lags

Clearly many more can be suggested. But which bottlenecks are the most limiting, and how can this be ascertained? Many people seem to assume that greater intelligence is simply a matter of 'horsepower' or more processing units or what have you. My analogy would be a distribution network where you get more and more carrying capacity as you add more trucks but once you reach the ocean adding more trucks won't help. Unless you fill the ocean with trucks I suppose. =D Does anyone care to address my main assumption? Does Godel's incompleteness theorem apply? Regards, Omar Rahman -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Thu Jan 27 17:12:45 2011 From: pharos at gmail.com (BillK) Date: Thu, 27 Jan 2011 17:12:45 +0000 Subject: [ExI] Volitional Longevity In-Reply-To: <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> Message-ID: On Wed, Jan 26, 2011 at 10:38 PM, natasha wrote: > I recall a couple of weeks ago, someone was posting about a phrase on DIY > health and life extension / transmortality. The phrase had the word > "volitional" or "volition" in it. > > Do you all remember this phrase and who used it? url? > > I think you might be thinking of Anders' post here: and your reply here: BillK From spike66 at att.net Thu Jan 27 17:02:55 2011 From: spike66 at att.net (spike) Date: Thu, 27 Jan 2011 09:02:55 -0800 Subject: [ExI] Volitional Longevity In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> Message-ID: <004e01cbbe44$0a923a40$1fb6aec0$@att.net> I recall a couple of weeks ago, someone was posting about a phrase on DIY health and life extension / transmortality. The phrase had the word "volitional" or "volition" in it... Natasha Vita-More Read it, don't recall seeing the term volitional anywhere. Spike From natasha at natasha.cc Thu Jan 27 19:02:34 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 27 Jan 2011 14:02:34 -0500 Subject: [ExI] Volitional Longevity In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> Message-ID: <20110127140234.4tqjwofntww0gggo@webmail.natasha.cc> BINGO!!!!!!! Thank you! I needed this phrase for my dissertation asap! N Quoting BillK : > On Wed, Jan 26, 2011 at 10:38 PM, natasha wrote: >> I recall a couple of weeks ago, someone was posting about a phrase on DIY >> health and life extension / transmortality.
The phrase had the word >> "volitional" or "volition" in it. >> >> Do you all remember this phrase and who used it? url? >> >> > > > I think you might be thinking of Anders' post here: > > and your reply here: > > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From msd001 at gmail.com Thu Jan 27 20:19:50 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 27 Jan 2011 15:19:50 -0500 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> Message-ID: On Thu, Jan 27, 2011 at 12:04 AM, spike wrote: > Perhaps Max's staff might want to join our efforts in finding a better way > to preserve sushi and valuable brains. > Sounds like you are preparing for a Zombie Apocalypse. I'm not exactly comfortable with "sushiable" being applied so casually to brain preservation. I expect Max would be more.. professional. :) From moulton at moulton.com Thu Jan 27 18:44:41 2011 From: moulton at moulton.com (F. C. Moulton) Date: Thu, 27 Jan 2011 10:44:41 -0800 Subject: [ExI] Volitional Longevity In-Reply-To: References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> Message-ID: <4D41BD19.4050903@moulton.com> I suspect that everyone was like me and was busy and assumed the persons who wrote the messages in question would send them to you. Since that appears to have not been the case I searched and found four messages similar to what you describe. I have forwarded each message to you. Fred Natasha Vita-More wrote: > Is anyone reading my emails? If not, just let me know and I'll go elsewhere, > but it would be sweet to get a reply to posts.
> > Many thanks, > Natasha > > > Natasha Vita-More > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > natasha at natasha.cc > Sent: Wednesday, January 26, 2011 4:39 PM > To: extropy-chat at lists.extropy.org > Subject: [ExI] Volitional Longevity > > I recall a couple of weeks ago, someone was posting about a phrase on DIY > health and life extension / transmortality. The phrase had the word > "volitional" or "volition" in it. > > Do you all remember this phrase and who used it? url? > > Many thanks, > Natasha > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From spike66 at att.net Thu Jan 27 21:18:48 2011 From: spike66 at att.net (spike) Date: Thu, 27 Jan 2011 13:18:48 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> Message-ID: <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> ... On Behalf Of Mike Dougherty Subject: Re: [ExI] Help with freezing phenomenon On Thu, Jan 27, 2011 at 12:04 AM, spike wrote: >> Perhaps Max's staff might want to join our efforts in finding a better way to preserve sushi and valuable brains. >Sounds like you are preparing for a Zombie Apocalypse. I'm not exactly comfortable with "sushiable" being applied so casually to brain preservation. I expect Max would be more.. professional. :) Ja but of course Max *IS* more...professional, specifically in cryonics. I am merely a potential volunteer. That being said, Jeff's original thought-provoking question was based on the preservation of valuable meat, in this case fish.
I went off on a wacky tangent and never did address the original question. We should revisit that, and apologies to Jeff for hijacking his thread. > I'm not exactly comfortable with "sushiable" being applied so casually to brain preservation... Sushi is a good example, because sushi is expensive for a reason: raw fish must be perfect. It must be visibly perfect. Any discolored, bruised, or in any way imperfect fish is not sushiable. Regarding the association of brain preservation and sushi, keep in mind that much of the task of cryonics, if not most of the task, is actually marketing. If people knew and understood the concept of cryonics, Alcor would have more business than it could handle. Even with the current or higher price structure, there are *plenty* of proles who would sign up for that in a heartbeat, if they properly understood what was being done there. Consider, for example, one completely irrelevant aspect of the mechanics of stacking patients in the dewar. Perhaps for convenience, some of the patients were oriented head down. It is completely irrelevant which direction the 1 G field is oriented, but from a marketing perspective it was a disaster. We here can scarcely imagine why every article on Ted Williams mentioned the fact that he was upside down in that dewar. What the hell difference does that make to a frozen corpse? Do let me assure you, it matters in the world of PR and marketing. Alcor has a far bigger task than the technical aspects of cryonics, in effectively marketing the idea. I am taking on some of the technical side, the far easier part. My use of the sushi comparison is an example of bad marketing, and thanks for pointing it out Mike. If you are squicked by that, then other potential customers might be squicked too. No need to explain it, it just is.
spike From msd001 at gmail.com Thu Jan 27 23:03:33 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 27 Jan 2011 18:03:33 -0500 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> Message-ID: On Thu, Jan 27, 2011 at 4:18 PM, spike wrote: > My use of the sushi comparison is an example of bad marketing, and thanks > for pointing it out Mike. If you are squicked by that, then other potential > customers might be squicked too. No need to explain it, it just is. I'm fine with the concept of frozen heads in a tank. It was the mental flip-flop (pun intended) from brains to fish and back. From the verbification of sushi to sushiable we're likely to get to sashimiable and though that might be ideal for scanning in a later analog-to-digital conversion, I keep imagining the plates getting confused between customers. I might wake up with the only memory of elementary school being a lot of swimming around and avoiding orcas ...and where my recollection of learning to ride a bicycle ends up is too ghastly to consider. :) From jonkc at bellsouth.net Fri Jan 28 06:01:15 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 28 Jan 2011 01:01:15 -0500 Subject: [ExI] Help with freezing phenomenon. In-Reply-To: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> Message-ID: <08D5A659-45EC-48CF-A363-75273967681E@bellsouth.net> On Jan 27, 2011, at 12:04 AM, spike wrote: > Then a second synchronized and 180 degrees out of phase microwave source could be injected from the other direction. [...] We could experiment with this theory by scavenging a couple of magnetrons > from two identical junky old microwave ovens. I don't see how that could work with a magnetron, I think you'd need a coherent beam from a microwave LASER, a MASER; although a Klystron might be good enough.
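The coherence point can be checked numerically: two counter-propagating waves with a locked 180-degree relative phase form a stable standing-wave intensity pattern with genuine nulls, while sources whose relative phase wanders (as with two free-running magnetrons) time-average to a flat intensity with no controllable cold spot. A minimal sketch; the wavenumber, frequency, and grid sizes are arbitrary illustration values, not a model of actual oven magnetrons:

```python
import numpy as np

rng = np.random.default_rng(0)
k, w = 2 * np.pi, 2 * np.pi        # arbitrary wavenumber and angular frequency
x = np.linspace(0, 2, 200)         # position grid spanning two wavelengths
t = np.linspace(0, 10, 800)        # time grid for time-averaging
X, T = np.meshgrid(x, t)

def intensity(phase):
    """Time-averaged intensity of two counter-propagating waves
    held at a fixed relative phase."""
    e = np.cos(k * X - w * T) + np.cos(k * X + w * T + phase)
    return (e ** 2).mean(axis=0)

# Coherent case: locked 180-degree phase -> standing wave with deep nulls.
i_coherent = intensity(np.pi)

# Incoherent case: averaging over many random relative phases washes out
# the interference term, leaving a nearly uniform intensity profile.
phases = rng.uniform(0.0, 2.0 * np.pi, 150)
i_incoherent = np.mean([intensity(p) for p in phases], axis=0)

print(i_coherent.min(), i_coherent.max())  # deep nulls vs. strong antinodes
print(i_incoherent.std())                  # nearly flat profile
```

Only a phase-stable source pair gives the controllable null pattern; two independent magnetrons correspond to the random-phase case, which is the substance of the maser/klystron objection.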
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Jan 28 06:54:27 2011 From: spike66 at att.net (spike) Date: Thu, 27 Jan 2011 22:54:27 -0800 Subject: [ExI] Help with freezing phenomenon. In-Reply-To: <08D5A659-45EC-48CF-A363-75273967681E@bellsouth.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <08D5A659-45EC-48CF-A363-75273967681E@bellsouth.net> Message-ID: <001801cbbeb8$34f7b520$9ee71f60$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Subject: Re: [ExI] Help with freezing phenomenon. On Jan 27, 2011, at 12:04 AM, spike wrote: Then a second synchronized and 180 degrees out of phase microwave source could be injected from the other direction. [...] We could experiment with this theory by scavenging a couple of magnetrons from two identical junky old microwave ovens. I don't see how that could work with a magnetron, I think you'd need a coherent beam from a microwave LASER, a MASER; although a Klystron might be good enough.John K Clark Ja agreed, but I now think there are better ways to accomplish the same thing, which is to avoid microcracking the outer layers of the brain by inducing it to freeze from the inside first. This was I think the spirit of Jeff Davis' original question regarding the thermal conductivity of water vs. ice. Suppose we supercooled the brain slightly to about -5C, perhaps by immersion of the patient's head in a saline solution or a mixture of water and ethylene glycol, or blood for instance. When the head is uniformly slightly supercooled, we put the container with the head in solution on a turntable and spin at some moderate rate of say ~100 RPM. The centrifugal force would cause the pressure at the center of the brain to be lower than at the periphery, so the freezing initiates there. 
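The spin idea lends itself to a back-of-envelope check: for rigid-body rotation of an incompressible, water-like fluid, the pressure rise from the axis out to radius r is dp = (1/2) * rho * omega^2 * r^2, and water's melting point drops by roughly 0.0074 K per atmosphere of added pressure. A hedged sketch, where the 10 cm radius, the density, and the melting-line slope are assumed round numbers rather than figures from the post:

```python
import math

RHO = 1000.0     # kg/m^3, water-like tissue density (assumed)
R = 0.10         # m, outer radius of the brain (assumed)
DT_DP = -7.4e-8  # K/Pa, approximate slope of water's melting line
                 # (about -0.0074 K per atmosphere)

def pressure_rise(rpm, radius=R, rho=RHO):
    """Pressure increase from the spin axis out to `radius` for
    rigid-body rotation at `rpm`: dp = 1/2 * rho * w^2 * r^2."""
    w = rpm * 2.0 * math.pi / 60.0
    return 0.5 * rho * w ** 2 * radius ** 2

def melting_point_shift(rpm):
    """Freezing-point depression (K) at the periphery relative to the axis."""
    return DT_DP * pressure_rise(rpm)

print(pressure_rise(100.0), melting_point_shift(100.0))

# Spin rate needed to depress the peripheral freezing point by 3 K:
dp_needed = 3.0 / abs(DT_DP)
rpm_needed = math.sqrt(2.0 * dp_needed / (RHO * R ** 2)) * 60.0 / (2.0 * math.pi)
print(rpm_needed)
```

On these assumptions the differential at 100 RPM is a few hundred pascals; the sketch only makes the scaling explicit, so the spin rate implied by any target freezing-point offset can be read off directly. Whether a small pressure bias suffices to steer where nucleation starts is a separate question from the magnitude of the freezing-point shift.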
The heat of fusion from freezing is steadily drawn out of the solution, and the ice forms at the interface between frozen and nonfrozen brain tissue, so the brain freezes from inside to out. No shell freezing and cracking that way. The freezing point is lower toward the outer regions of the brain since it is under slightly more pressure, but not a lot of pressure. Just enough to depress the freezing point a couple degrees, so that it freezes last at a temperature of perhaps -8C. For high-end Ted Williams-ish patients who insist on full body cryonics, that procedure could actually be used without severing the head. A full body could be immersed and placed on a turntable, the brain frozen inside out. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Fri Jan 28 09:34:33 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 28 Jan 2011 02:34:33 -0700 Subject: [ExI] Volitional Longevity In-Reply-To: <4D41BD19.4050903@moulton.com> References: <00ac01cbbda0$90c57db0$b2507910$@att.net> <20110126173853.mdnxf6rtmo4s0c00@webmail.natasha.cc> <4D41BD19.4050903@moulton.com> Message-ID: Natasha wrote: Is anyone reading my emails? If not, just let me know and I'll go elsewhere, but it would be sweet to get a reply to posts. Many thanks, Natasha >>> We must always remember that Natasha is our Extropian Queen, and so we are her unruly bunch of loyal subjects! "Providing leadership to transhumanists is often much like herding cats!" John : ) On 1/27/11, F. C. Moulton wrote: > > I suspect that everyone was like me and was busy and assumed the persons > who wrote the messages in question would send them to you. Since that > appears to have not been the case I searched and found four messages > similar to what you describe. I have forwarded each message to you. > > Fred > > Natasha Vita-More wrote: >> Is anyone reading my emails?
If not, just let me know and I'll go >> elsewhere, >> but it would be sweet to get a reply to posts. >> >> Many thanks, >> Natasha >> >> >> Natasha Vita-More >> >> -----Original Message----- >> From: extropy-chat-bounces at lists.extropy.org >> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of >> natasha at natasha.cc >> Sent: Wednesday, January 26, 2011 4:39 PM >> To: extropy-chat at lists.extropy.org >> Subject: [ExI] Volitional Longevity >> >> I recall a couple of weeks ago, someone was posting about a phrase on DIY >> health and life extension / transmortality. The phrase had the word >> "volitional" or "volition" in it. >> >> Do you all remember this phrase and who used it? url? >> >> Many thanks, >> Natasha >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From amara at kurzweilai.net Fri Jan 28 09:32:06 2011 From: amara at kurzweilai.net (Amara D.
Angelica) Date: Fri, 28 Jan 2011 01:32:06 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> Message-ID: <04a301cbbece$3ad8ac90$b08a05b0$@net> Radio-Frequency and ELF Electromagnetic Energies: A Handbook for Health (readable here) is a useful source for this discussion: http://books.google.com/books?id=lOMWHnZ4h_cC&pg=PA51&lpg=PA51&dq=absorption+of+microwaves+in+human+head+by+frequency&source=bl&ots=eIIqaYtuIt&sig=ZuH3xtZqiExb14lbx_Dzx9HgE98&hl=en&ei=_YdCTbrCDoK6sAOs9LyPCg&sa=X&oi=book_result&ct=result&resnum=5&ved=0CD8Q6AEwBA#v=onepage&q=absorption%20of%20microwaves%20in%20human%20head%20by%20frequency&f=false. Page 30 is a summary of penetration depth vs. frequency and absorbed energy (measured as SAR, or Specific Absorption Rate). Frequency-sweeping between 10 MHz (deep penetration) and 100 GHz (limited to ~ mm range) could achieve adjustable, homogeneous gross heat distribution radially, but isometric near-field antenna design to achieve homogeneous heat distribution is left as an exercise to the reader :). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 16908 bytes Desc: not available URL: From eugen at leitl.org Fri Jan 28 10:56:02 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 28 Jan 2011 11:56:02 +0100 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> Message-ID: <20110128105602.GK23560@leitl.org> On Thu, Jan 27, 2011 at 06:03:33PM -0500, Mike Dougherty wrote: > I'm fine with the concept of frozen heads in a tank. It was the
Also, you would want to vitrify (which is not trivial, by the way, and beware of people who claim they do vitrification without proof). > mental flip-flop (pun intended) from brains to fish and back. From > the verbification of sushi to sushiable we're likely to get to > sashimiable and though that might be ideal for scanning in a later > analog-to-digital conversion, I keep imagining the plates getting > confused between customers. I might wake up with the only memory of > elementary school being a lot of swimming around and avoiding orcas > ...and where my recollection of learning to ride a bicycle end up is > too ghastly to consider. :) -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From possiblepaths2050 at gmail.com Fri Jan 28 11:38:15 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 28 Jan 2011 04:38:15 -0700 Subject: [ExI] interesting new NOVA episodes Message-ID: I've gotten back into the habit of watching this wonderful science series. Can we live forever? http://www.pbs.org/wgbh/nova/body/can-we-live-forever.html The power of small... http://www.pbs.org/wgbh/nova/tech/making-stuff-smaller.html John : ) From possiblepaths2050 at gmail.com Fri Jan 28 11:44:23 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 28 Jan 2011 04:44:23 -0700 Subject: [ExI] Transcendent Man documentary finally available for purchase! Message-ID: I have been waiting ages (or at least it feels like it) for this to be released! 
http://www.transcendentman.com/?utm_source=StoreFBTab1&utm_medium=facebook%2Bcpc&utm_campaign=StoreFBTab1 John : ) From possiblepaths2050 at gmail.com Fri Jan 28 12:06:08 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 28 Jan 2011 05:06:08 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits Message-ID: Does this mean we are possibly moving ahead of schedule toward AGI? http://www.electronista.com/articles/11/01/21/oxford.u.makes.headway.toward.quantum.computer/ John From rpwl at lightlink.com Fri Jan 28 13:18:46 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 08:18:46 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: References: Message-ID: <4D42C236.2020203@lightlink.com> John Grigg wrote: > Does this mean we are possibly moving ahead of schedule toward AGI? > > http://www.electronista.com/articles/11/01/21/oxford.u.makes.headway.toward.quantum.computer/ The answer to your question is NO. Getting toward AGI is a question of the theory, and the software. The relevance of hardware advances like this is completely unknown until a working design can be supplied. We are a long way away from AGI, unless people start to wake up to the farcical state of affairs in artificial intelligence at the moment. Richard Loosemore From eugen at leitl.org Fri Jan 28 13:52:15 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 28 Jan 2011 14:52:15 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: <4D42C236.2020203@lightlink.com> References: <4D42C236.2020203@lightlink.com> Message-ID: <20110128135215.GV23560@leitl.org> On Fri, Jan 28, 2011 at 08:18:46AM -0500, Richard Loosemore wrote: > John Grigg wrote: >> Does this mean we are possibly moving ahead of schedule toward AGI? >> >> http://www.electronista.com/articles/11/01/21/oxford.u.makes.headway.toward.quantum.computer/ > > The answer to your question is NO. 
> > Getting toward AGI is a question of the theory, and the software. Do you need theory for your brain's operation in order to operate it? What kind of software are you running right now, and where exactly is it separate from hardware? Where is the magical time period where state becomes embodied as permanent structure? > The relevance of hardware advances like this is completely unknown until > a working design can be supplied. If you try to track what a given piece of neocortex is doing in current hardware you will realize that you need a lot of crunch. > We are a long way away from AGI, unless people start to wake up to the > farcical state of affairs in artificial intelligence at the moment. Finally something we can agree on. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Fri Jan 28 15:28:27 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 10:28:27 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: <20110128135215.GV23560@leitl.org> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> Message-ID: <4D42E09B.8020308@lightlink.com> Eugen Leitl wrote: > On Fri, Jan 28, 2011 at 08:18:46AM -0500, Richard Loosemore wrote: >> John Grigg wrote: >>> Does this mean we are possibly moving ahead of schedule toward AGI? >>> >>> http://www.electronista.com/articles/11/01/21/oxford.u.makes.headway.toward.quantum.computer/ >> The answer to your question is NO. >> >> Getting toward AGI is a question of the theory, and the software. > > Do you need theory for your brain's operation in order to > operate it? What kind of software are you running right now, > and where exactly is it separate from hardware? 
Where is > the magical time period where state becomes embodied as > permanent structure? You are referring to the idea that building an AGI is about "simply" duplicating the human brain? And that therefore the main obstacle is having the hardware to do that? This is an approach that might be called "blind replication". Copying without understanding. I tried to do that once, when I was a kid. I built an electronic circuit using a published design, but with no clue how the components worked, or how the system functioned. It turned out that there was one small problem, somewhere in my implementation. Probably just the one. And so the circuit didn't work. And since I was blind to the functionality, there was absolutely nothing I could do about it. I had no idea where to look to fix the problem. To do AGI you need to understand what you are building. The idea of successfully replicating a system as fantastically complex as the human brain, without first sorting out the FUNCTIONALITY -- i.e. the software -- is a hollow dream. (Not to mention that virtually nobody in the AGI community is actually trying to do that right now. WBE is done by neuroscientists who seem not to have thought about these issues much, and they don't call what they do "AGI") >> The relevance of hardware advances like this is completely unknown until >> a working design can be supplied. > > If you try to track what a given piece of neocortex is doing > in current hardware you will realize that you need a lot of > crunch. Track what the neocortex is doing? I am doing that. That is my research... except that I am doing it at the high-level, functional level. What I am trying to do is understand how the neocortex works, not how the signals are chasing each other around. Those are two different things, like the difference between electronic engineering and software engineering.
And, so far, it looks as though the cortex may be playing a functional role that can be implemented with a few orders of magnitude less hardware than the brain uses. >> We are a long way away from AGI, unless people start to wake up to the >> farcical state of affairs in artificial intelligence at the moment. > > Finally something we can agree on. Well, we agree on this (as you probably know) for completely different reasons. At least I think we do. If you are saying this because you agree with the critique in my complex systems paper, I will be a pleasantly surprised person today. Richard Loosemore From jonkc at bellsouth.net Fri Jan 28 15:23:16 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 28 Jan 2011 10:23:16 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <20110128135215.GV23560@leitl.org> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> Message-ID: On Jan 28, 2011, at 8:52 AM, Eugen Leitl wrote: >> Richard Loosemore wrote: > >> We are a long way away from AGI, unless people start to wake up to the >> farcical state of affairs in artificial intelligence at the moment. > > Finally something we can agree on. Have you seen this? http://www.youtube.com/watch?v=WFR3lOm_xhE This was just a test run, there is supposed to be a 3 day competition between the 2 all time best human champions and the machine sometime in February. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Fri Jan 28 15:39:48 2011 From: spike66 at att.net (spike) Date: Fri, 28 Jan 2011 07:39:48 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <04a301cbbece$3ad8ac90$b08a05b0$@net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> <04a301cbbece$3ad8ac90$b08a05b0$@net> Message-ID: <002701cbbf01$98dd8d00$ca98a700$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Amara D. Angelica . Subject: Re: [ExI] Help with freezing phenomenon Radio-Frequency and ELF Electromagnetic Energies: A Handbook for Health (readable here) is a useful source for this discussion: http://books.google.com/books?id=lOMWHnZ4h_cC&pg=PA51&lpg=PA51&dq=absorption+of+microwaves+in+human+head+by+frequency&source=bl&ots=eIIqaYtuIt&sig=ZuH3xtZqiExb14lbx_Dzx9HgE98&hl=en&ei=_YdCTbrCDoK6sAOs9LyPCg&sa=X&oi=book_result&ct=result&resnum=5&ved=0CD8Q6AEwBA#v=onepage&q=absorption%20of%20microwaves%20in%20human%20head%20by%20frequency&f=false. Page 30 is a summary of penetration depth vs. frequency and absorbed energy. Excellent! Thanks Amara. I am thinking of alternatives to microwave temperature control as a means of freezing a brain in a controlled non-destructive manner. A derivative of Keith's idea is to remove tissue from the nasal cavity and start the freezing process from the top of the head. That way the brain tissue can expand downward into the partially evacuated nasal cavity. I am actually leaning more toward that notion than microwave process control. It also obviates splitting the skull. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Fri Jan 28 16:01:23 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 11:01:23 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> Message-ID: <4D42E853.50706@lightlink.com> John Clark wrote: > On Jan 28, 2011, at 8:52 AM, Eugen Leitl wrote: > >>> Richard Loosemore wrote: >> >>> We are a long way away from AGI, unless people start to wake up to the >>> farcical state of affairs in artificial intelligence at the moment. >> >> Finally something we can agree on. > > Have you seen this? > > http://www.youtube.com/watch?v=WFR3lOm_xhE > > This was just a test run, there is supposed to be a 3 day competition > between the 2 all time best human champions and the machine sometime in > February. Yes, but do you have any idea how trivial this is? The IBM computer playing Jeopardy is just a glorified version of Winograd's SHRDLU. With enough information it can home in on answers to simple questions. Doing that kind of stuff is like winning a barrelized fish-shooting contest: if you have a big enough encyclopaedia in there, and you do a fast enough search, you get near to the relevant facts. But that is not the same as structured intelligence. As I write these words I am sitting here getting ready to teach some students enough physics and vector calculus that they can understand Gauss's theorem, Maxwell's equations, the subtleties of EM induction ... and these kids will (if we're lucky) be able to understand all that in a couple of months' time. But if I tried to have a conversation with that IBM Jeopardy computer about these things, would it be able to start understanding, if I took it real slow? No, not at all. If you know something about the techniques and the tricks that the IDIOT BLUE team are using to get their baby to do that stuff, you will know that this is not a step on the road, it is a dead end. What is sad about all this is that AI has been through so many of these cycles. Thinking that dictionary lookup plus a few extras is all you need for intelligence. This is true.
It is just that the "few extras" are 99.999% of the problem. Richard Loosemore From eugen at leitl.org Fri Jan 28 16:05:45 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 28 Jan 2011 17:05:45 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> Message-ID: <20110128160545.GB23560@leitl.org> On Fri, Jan 28, 2011 at 10:23:16AM -0500, John Clark wrote: > On Jan 28, 2011, at 8:52 AM, Eugen Leitl wrote: > > >> Richard Loosemore wrote: > > > >> We are a long way away from AGI, unless people start to wake up to the > >> farcical state of affairs in artificial intelligence at the moment. > > > > Finally something we can agree on. > > Have you seen this? > > http://www.youtube.com/watch?v=WFR3lOm_xhE I'm quite aware of IBM's activities in general, and this one in particular. http://www.google.com/search?hl=en&q=site:postbiota.org+jeopardy+IBM > This was just a test run, there is supposed to be a 3 day competition between the 2 all time best human champions and the machine sometime in February. It ain't AI until it's competitive with human jobs. *All* human jobs, across the board. Whether CEO or plumber. Isolated capability peaks mean nothing. You can't fill in a landscape with enough peaks. Doesn't work that way. A human baby starts with very little, and gradually floods the capability landscape on its own. Any artificial equivalent needs to be able to do that at least. 
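Richard's characterization above (a big enough encyclopaedia plus a fast enough search gets you near the relevant facts) can be made concrete with a toy sketch. The facts and the scoring scheme below are invented for illustration: a crude word-overlap retriever with inverse-document-frequency weighting. It shows the generic lookup-plus-search pattern under discussion, and is in no way a claim about how Watson's actual pipeline works.

```python
# Toy "lookup plus search" question answerer: score each stored fact by
# word overlap with the question, weighting rare words more heavily
# (a crude TF-IDF). Illustrative only; not IBM's design.
import math
import re
from collections import Counter

FACTS = [
    "Lee Dorman played bass on In-A-Gadda-Da-Vida by Iron Butterfly",
    "Garry Kasparov lost a chess match to Deep Blue in 1997",
    "Terry Winograd wrote the SHRDLU natural language program",
    "James Clerk Maxwell formulated the classical theory of electromagnetism",
]

def tokens(text):
    # Lowercase alphabetic word extraction.
    return re.findall(r"[a-z]+", text.lower())

# Document frequency: in how many facts each word appears.
DF = Counter(w for fact in FACTS for w in set(tokens(fact)))

def idf(word):
    # Rare words are more informative than common ones.
    return math.log((1 + len(FACTS)) / (1 + DF[word]))

def answer(question):
    """Return the stored fact whose rare words best overlap the question."""
    q = set(tokens(question))
    return max(FACTS, key=lambda f: sum(idf(w) for w in q & set(tokens(f))))

print(answer("Who played bass on In-a-gadda-da-vida?"))
# -> Lee Dorman played bass on In-A-Gadda-Da-Vida by Iron Butterfly
```

Ask it anything outside its fact list, or ask it to play tic-tac-toe, and there is simply no mechanism there at all: the isolated capability peak Eugen describes.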
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Fri Jan 28 16:00:48 2011 From: spike66 at att.net (spike) Date: Fri, 28 Jan 2011 08:00:48 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <20110128105602.GK23560@leitl.org> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> <20110128105602.GK23560@leitl.org> Message-ID: <004101cbbf04$87ee8000$97cb8000$@att.net> ... On Behalf Of Eugen Leitl Subject: Re: [ExI] Help with freezing phenomenon On Thu, Jan 27, 2011 at 06:03:33PM -0500, Mike Dougherty wrote: >> I'm fine with the concept of frozen heads in a tank. It was the >The proper term is patients. Neuropatients, specifically. There are very good reasons to avoid using other terms...Eugen Ja. We must keep in mind that Alcor is actually (and primarily for many people) a mortuary. In many if not most cases, the patient is the strong believer in the notion of cryonics, but the family of the patient is not. They play along to carry out the wishes of the (likely wealthy) deceased. So we need to pay attention to the emotional and even spiritual needs of the family, even if we would prefer to think of Alcor as a hospital for geeks, rather than a trendy spendy mortuary. I could follow that to its logical conclusion: Alcor might consider hiring a minister. Oy vey, heresy. Actually retract that, they probably thought of this long ago and already have one or more of these types. Or failing that, Alcor should have a volunteer religion hipster of some sort, to help chill out the family while they chill the patient. For instance, I previously commented on why it was a really bad idea for the patients to be stacked head downward in the dewar. Is it clear why that is? 
Something the religion non-hipster would never have thought of: according to tradition (external to the bible) when the Apostle Peter was being crucified, he requested he be crucified upside down, for he considered himself unworthy to be slain in the same manner as Jesus. Being executed head-down was considered an additional insult as well as a punishment. > ...mental flip-flop (pun intended) from brains to fish and back. From > the verbification of sushi to sushiable we're likely to get to > sashimiable... I also like your term sashimiable, because it could be tossed out at a medical convention as a gag: watch all the doctors pretend they know the definition of sashimiable. Good chance no one would ask. Neither is an example of verbification, since both sushiable and sashimiable are adjectives. Point taken however, for your argument also applies to adjectivization. spike From eugen at leitl.org Fri Jan 28 16:15:49 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 28 Jan 2011 17:15:49 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: <4D42E09B.8020308@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E09B.8020308@lightlink.com> Message-ID: <20110128161549.GC23560@leitl.org> On Fri, Jan 28, 2011 at 10:28:27AM -0500, Richard Loosemore wrote: > You are referring to the idea that building an AGI is about "simply" Simply, and, no, there's nothing simple about that. > duplicating the human brain? And that therefore the main obstacle is > having the hardware to do that? In general, the brain is doing something. It is metabolically constrained, so it cannot afford being grossly inefficient. If you look at the details, then you see it's operating pretty close at the limits of what is possible in biology. And biology can run rings around our current capabilities in many critical aspects. 
As a student of artificial and natural systems I am painfully aware of current limitations to computers. > This is an approach that might be called "blind replication". Copying > without understanding. If you can instantiate an expert at the drop of a hat that's pretty good in my book. > I tried to do that once, when I was a kid. I built an electronic > circuit using a published design, but with no clue how the components > worked, or how the system functioned. > > It turned out that there was one small problem, somewhere in my > implementation. Probably just the one. And so the circuit didn't work. > And since I was blind to the functionality, there was absolutely > nothing I could do about it. I had no idea where to look to fix the > problem. Ah, but we know quite a lot, and there's a working instance in front of us to go check to compare notes. > To do AGI you need to understand what you are building. The idea of Absolutely disagree. I don't think there's anything understandable in there, at least simply understandable. No neat sheet of equations to write down, and then run along corridors naked, hollering EUREKA at the top of your lungs. > successfully replicating a system as fantastically complex as the human > brain, without first sorting out the FUNCTIONALITY -- i.e. the software That's just the point, there is no software. It's a physical system with state, which implements different processes at different temporal scales using whatever was close at hand at the time it needed it. > -- is a hollow dream. > > (Not to mention that virtually nobody in the AGI community is actually > trying to do that right now. WBE is done by neuroscientists who seem > not to have thought about these issues much, and they don't call what > they do "AGI") When a field has stuck, it is frequently people from another field who come in, and bring back the torch of illumination. 
> > >>> The relevance of hardware advances like this is completely unknown >>> until a working design can be supplied. >> >> If you try to track what a given piece of neocortex is doing >> in current hardware you will realize that you need a lot of crunch. > > Track what the neocortex is doing? I am doing that. That is my > research.... except that I am doing it at the high-level, functional I don't see how you can extract anything at the high level without looking at ultrascale to molecular scale. > level. What I am trying to do is understand how the neocortex works, > not how the signals are chasing each other around. Those are two I don't think there's anything there to understand, but of course I don't know that for sure. So you're doing a valuable effort. > different things, like the difference between electronic engineering and > software engineering. Biology doesn't do OSI layers. > And, so far, it looks as though the cortex may be playing a functional > role that can be implemented with a few orders of magnitude less > hardware than the brain uses. Do you have a nice publication track we can take a look at? >>> We are a long way away from AGI, unless people start to wake up to >>> the farcical state of affairs in artificial intelligence at the >>> moment. >> >> Finally something we can agree on. > > Well, we agree on this (as you probably know) for completely different > reasons. Maybe, maybe not. > At least I think we do. If you are saying this because you agree with > the critique in my complex systems paper, I will be a pleasantly Do you have a reference for your complex systems paper to share? > surprised person today. 
> > > Richard Loosemore > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Fri Jan 28 16:00:05 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 28 Jan 2011 11:00:05 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: <4D42E09B.8020308@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E09B.8020308@lightlink.com> Message-ID: <5A67011E-C4EF-4F7D-9436-99398C720491@bellsouth.net> On Jan 28, 2011, at 10:28 AM, Richard Loosemore wrote: > To do AGI you need to understand what you are building. I really don't see what Adjusted Gross Income has to do with it; however we can be absolutely certain that an AI can be made without knowing what you are doing because an intelligence has already been manufactured by Evolution, and Evolution knows no more about intelligence than dog shit knows about table tennis. If random mutation and natural selection, which knows absolutely nothing, can do it then I see no reason why an intelligent organic brain, which at least knows something, can't do the same thing. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Jan 28 16:06:36 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 28 Jan 2011 11:06:36 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: <4D42E853.50706@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: On Jan 28, 2011, at 11:01 AM, Richard Loosemore wrote: > do you have any idea how trivial this is? No. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Fri Jan 28 16:45:26 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 11:45:26 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: <20110128161549.GC23560@leitl.org> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E09B.8020308@lightlink.com> <20110128161549.GC23560@leitl.org> Message-ID: <4D42F2A6.5060603@lightlink.com> Eugen Leitl wrote: > On Fri, Jan 28, 2011 at 10:28:27AM -0500, Richard Loosemore wrote: > >> You are referring to the idea that building an AGI is about "simply" > > Simply, and, no, there's nothing simple about that. > >> duplicating the human brain? And that therefore the main obstacle is >> having the hardware to do that? > > In general, the brain is doing something. It is metabolically > constrained, so it cannot afford being grossly inefficient. > If you look at the details, then you see it's operating pretty > close at the limits of what is possible in biology. And biology > can run rings around our current capabilities in many critical > aspects. The brain may be operating efficiently at the chemistry level, but that says nothing about the functional level. The constraints that the "design" of the human brain optimized were determined by accidents of evolution (no metallic-conductor or optical signal lines, for one thing). That does not mean that the functional level can only be duplicated with the same hardware. >> This is an approach that might be called "blind replication". Copying >> without understanding. >> I tried to do that once, when I was a kid. 
I built an electronic >> circuit using a published design, but with no clue how the components >> worked, or how the system functioned. >> >> It turned out that there was one small problem, somewhere in my >> implementation. Probably just the one. And so the circuit didn't work. >> And since I was blind to the functionality, there was absolutely >> nothing I could do about it. I had no idea where to look to fix the >> problem. > > Ah, but we know quite a lot, and there's a working instance in front > of us to go check to compare notes. But this was exactly my original point. We do not know quite a lot: the theory, and the engineering understanding of brain functionality, is all shot to pieces. I am stating this as a matter of personal experience with this research field .... stating my opinion of the current state of the art, from my perspective as a cognitive scientist. I may be wrong about the appalling state of our current understanding, but if you and I are debating just how good or how bad that understanding is, then the level of understanding IS the issue. Yes, there is a working instance in front of us (and, of all the people at the last AGI conference I went to, I may well have been the one who studies the design of that working instance in the most detail), but it turns out to be very hard to use that example, because interpreting the signals that we can get access to is fantastically hard. So interpreting the information that we get from looking at the human brain is -- and this was part of my original point -- extremely theory-dependent. We both agree, I think, that if the folks at the Whole Brain Emulation project were to get a single human brain analog working, and if it should happen that this replica did nothing but gibber, or free-associate, or spend half its time in epileptic fits, debugging that system would require some understanding of how it worked. At that point, understanding the functionality would be everything. 
You express optimism that "we know quite a lot", etc etc. I disagree. I have seen what the neuroscience people (and the theoretical neuroscience people) have in the way of a theory, and it is so weak it is not even funny. A resurrection of old-school behaviorism, and a lot of statistical signal analysis. That is it. >> To do AGI you need to understand what you are building. The idea of > > Absolutely disagree. I don't think there's anything understandable in > there, at least simply understandable. No neat sheet of equations to > write down, and then run along corridors naked, hollering EUREKA > at the top of your lungs. But where do you come from when you say this? Do you have two or three decades of detailed understanding of cognitive science under your belt, so we can talk about, for example, the role of the constraint-satisfaction metaphor in connectionism, or the complex-systems problem and its impact on models of cognition? Have you tried using weak constraint models to understand a broad range of cognitive phenomena? Do you have a feel, yet, for how many of them seem amenable to that treatment, in a unified way? I'm ready to engage in debates at that level, if you want, so we can argue about the current state of progress. But what I hear from you is a complaint about the lack of understandability of the human cognitive system, from someone who is not even part of the community that is trying! ;-) Richard Loosemore From rpwl at lightlink.com Fri Jan 28 16:46:56 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 11:46:56 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: <4D42F300.2080501@lightlink.com> John Clark wrote: > On Jan 28, 2011, at 11:01 AM, Richard Loosemore wrote: > >> do you have any idea how trivial this is? > > No. 
> > John K Clark That should have been "What is 'no'?" Richard Loosemore From atymes at gmail.com Fri Jan 28 17:41:53 2011 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 28 Jan 2011 09:41:53 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: 2011/1/28 John Clark : > On Jan 28, 2011, at 11:01 AM, Richard Loosemore wrote: > > do you have any idea how trivial this is? > > No. Richard's already stated it well. I have some expertise in the field, and I agree with what he has said here. That said, getting back to the original point: better quantum computers make certain theories of how human-level AI might work, easier to test. If one of those theories turns out to be The One, then yes, we could be a bit closer because of this. If not, we're not. It's impossible to say for sure yet. Even if not, a single computer with 10 billion qubits is potentially commercially usable, for the things we know quantum computers are useful for...if someone commercializes this discovery. ("Exists in lab" is one thing, but "is available to, and practical for use by, people who would benefit from using it" is quite another.) From kanzure at gmail.com Fri Jan 28 17:19:30 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 28 Jan 2011 11:19:30 -0600 Subject: [ExI] Transcendent Man documentary finally available for purchase! In-Reply-To: References: Message-ID: On Fri, Jan 28, 2011 at 5:44 AM, John Grigg wrote: > I have been waiting ages (or at least it feels like it) for this to be > released! > Does anyone have a torrent yet? Shame on Ray. :-( - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rpwl at lightlink.com Fri Jan 28 17:51:56 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 12:51:56 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: <4D43023C.8060702@lightlink.com> Adrian Tymes wrote: > 2011/1/28 John Clark : >> On Jan 28, 2011, at 11:01 AM, Richard Loosemore wrote: >>> do you have any idea how trivial this is? >> No. > > Richard's already stated it well. I have some expertise in > the field, and I agree with what he has said here. > > That said, getting back to the original point: better quantum > computers make certain theories of how human-level AI > might work, easier to test. If one of those theories turns out > to be The One, then yes, we could be a bit closer because > of this. If not, we're not. It's impossible to say for sure yet. > > Even if not, a single computer with 10 billion qubits is > potentially commercially usable, for the things we know > quantum computers are useful for...if someone > commercializes this discovery. ("Exists in lab" is one > thing, but "is available to, and practical for use by, people > who would benefit from using it" is quite another.) I'll concur with this, and I should add that I don't want to overstate my case. Faster hardware is always useful for doing the research itself. In fact, because my own approach to AGI requires the execution of very large numbers of simulation experiments, the more horsepower, the better. So it could speed up my attempts to understand AGI theory. I guess what I am fighting is the idea (which is very common, it seems to me) that lack of horsepower is what is holding back AGI. 
Richard Loosemore From jonkc at bellsouth.net Fri Jan 28 17:25:38 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 28 Jan 2011 12:25:38 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <20110128160545.GB23560@leitl.org> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> Message-ID: <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> On Jan 28, 2011, at 11:05 AM, Eugen Leitl wrote: > Isolated capability peaks mean nothing. Nothing? The pattern is always the same. Solving calculus problems required intelligence, beating a Chess Grandmaster required intelligence, being a great research Librarian required intelligence, and beating a Jeopardy champion required intelligence; but then computers could do these things better than humans and suddenly we found that these activities had absolutely nothing to do with intelligence. How odd. > It ain't AI until it's competitive with human jobs. Many members of our species won't be satisfied even then. And so just before he was sent into oblivion for eternity the last surviving human being turned to the Jupiter Brain and said "You're not 'REALLY' as smart as me". John K Clark > *All* > human jobs, across the board. Whether CEO or plumber. > > Isolated capability peaks mean nothing. You can't fill in > a landscape with enough peaks. Doesn't work that way. > > A human baby starts with very little, and gradually floods > the capability landscape on its own. Any artificial equivalent > needs to be able to do that at least. 
> > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Jan 28 19:27:53 2011 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 28 Jan 2011 11:27:53 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: On Fri, Jan 28, 2011 at 9:51 AM, Richard Loosemore wrote: > I guess what I am fighting is the idea (which is very common, it seems to > me) that lack of horsepower is what is holding back AGI. Lack of horsepower is one of the things holding back AGI, but hardly the sole thing. Solving it gets us "closer", but there remain wildly different challenges to solve before AGI can be realized. 2011/1/28 John Clark : > Nothing? The pattern is always the same. Solving calculus problems required > intelligence, beating a Chess Grandmaster required intelligence, being a > great research Librarian required intelligence, and beating a Jeopardy > champion required intelligence; but then computers could do these things > better than humans and suddenly we found that these activities had > absolutely nothing to do with intelligence. How odd. Yep. Because, in the process of solving them, we keep finding tricks and cheats that those who thought "X requires intelligence" didn't conceive of. 
In general, those who propose that one specific capability and task - which can not be used to learn and support entirely different capabilities and tasks - is "intelligence" aren't thinking very hard about it. > > It ain't AI until it's competitive with human jobs. > > Many members of our species won't be satisfied even then. More to the point, computers have replaced certain human jobs over the years. How many human telephone operators are employed these days, vs. in the 1960s? > And so just before he was sent into oblivion for eternity the last surviving > human being turned to the Jupiter Brain and said "You're not 'REALLY' as > smart as me". Yes, but note that the Jupiter Brain was capable of doing that. Watson is not capable of doing anything but Jeopardy, and it certainly didn't learn to do that on its own - rather, several humans figured out how to do it, and codified their thinking into a tool. Get me a computer that can learn to do things it was never programmed or designed to do. (In the broad sense, not "this theorem solver was not specifically programmed with this particular theorem, but it solved it".) Even that might not be true intelligence, but it will be closer than what we have now. Note that the Turing Test is a partial codification of this. Instruct, in ordinary English (or other human language of similar breadth of use - which rules out C++ and similar languages), a computer on how to do a thing it has never done before. Have it do that thing, and improve its own performance, filling in not-explicitly-stated requirements. The "trick" that's widely used for this today is, get humans to state things in highly formal ways. For instance, connecting a telephone call: a telephone number is a number, and there is one obvious almost-all-purpose way to phrase the number. A computer can be programmed to look up that number, look up where you are calling from, and plot out a connection given what it knows of the telephone circuits. 
But this is a very different problem from, say, taking a typical bureaucracy's ill-documented procedures and figuring out who you need to call to accomplish a certain action - especially when the documentation turns out to be wrong (say, by being out of date). From sparge at gmail.com Fri Jan 28 19:30:17 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 28 Jan 2011 14:30:17 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: 2011/1/28 John Clark > Nothing? The pattern is always the same. Solving calculus problems required > intelligence, beating a Chess Grandmaster required intelligence, being a > great research Librarian required intelligence, and beating a Jeopardy > champion required intelligence; but then computers could do these things > better than humans and suddenly we found that these activities had > absolutely nothing to do with intelligence. How odd. These isolated systems act intelligent, but they're not really intelligent. They can't learn and they don't understand. Deep Blue could dominate me on the chess board but it couldn't beat a 4-year-old at tic tac toe. Make a system that knows nothing about tic tac toe but can learn the rules (via audio/video explanation by a human) and play the game, and I'll be impressed. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Fri Jan 28 19:43:18 2011 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 28 Jan 2011 11:43:18 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: 2011/1/28 Dave Sill : > These isolated systems act intelligent, but they're not really intelligent. > They can't learn and they don't understand. Deep Blue could dominate me on > the chess board but it couldn't beat a 4-year-old at tic tac toe. Make a > system that knows nothing about tic tac toe but can learn the rules (via > audio/video explanation by a human) and play the game, and I'll be > impressed. Just to toss out a bit for contemplation: How do we know that there is not some similar trick, whereby a system could do this and still not be what we would consider intelligent? Or rather, what kinds of tricks might allow for such a thing? Can't think of any? Neither could those who declared that chess grandmastery required true intelligence...but they might not have known of the types of AI tricks that were to come. There may be a good answer. If there is, it would be useful, in this discussion, to have it. From sparge at gmail.com Fri Jan 28 19:14:24 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 28 Jan 2011 14:14:24 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D42E853.50706@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: On Fri, Jan 28, 2011 at 11:01 AM, Richard Loosemore wrote: > > > What is sad about all this is that AI has been through so many of these > cycles. Thinking that dictionary lookup plus a few extras is all you need > for intelligence. This is true. It is just that the "few extras" are > 99.999% of the problem. I completely agree that Watson isn't AI, but I disagree that it's trivial. 
From an AI perspective, a google search may be trivial, but the ability to search hundreds of thousands of web sites instantly is incredibly useful--and, although it's a simple idea, implementing it is anything but trivial. Thinking of Watson as a next-gen search engine, one starts to see how important it could be. Sure, I can type a query into google on my phone. And I haven't tried it, but I think I can even speak a query into my phone, though I don't think it'll speak the results back to me. But if I could speak a query to Watson and get a spoken response almost instantly? That would be awesome. "Who played bass on In-a-gadda-da-vida?" "Lee Dorman". With Google that's going to take a few minutes and a couple of searches--and you could easily get the wrong answer. Now imagine a personal Watson that has access to your personal data. "What's the name of the pizza joint in Peoria I went to back in '05?" That would be handy. How about a chronological list of every known pizza joint I've been to? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Fri Jan 28 20:26:55 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 28 Jan 2011 13:26:55 -0700 Subject: [ExI] Help with freezing phenomenon Message-ID: On Fri, Jan 28, 2011 at 12:45 PM, "spike" wrote: snip > Excellent! Thanks Amara. I am thinking of alternatives to microwave > temperature control as a means of freezing a brain in a controlled > non-destructive manner. A derivative of Keith's idea is to remove tissue > from the nasal cavity and start the freezing process from the top of the > head. That way the brain tissue can expand downward into the partially > evacuated nasal cavity. I am actually leaning more toward that notion than > microwave process control. It also obviates splitting the skull. 
Or are you just making a joke based on the way the brains were extracted from Egyptian mummies? BTW, cracking from thermally induced stress is a known problem. It is believed that there is little if any information loss from the cracking. Keith PS Speaking of Egypt . . . From rpwl at lightlink.com Fri Jan 28 20:36:51 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 15:36:51 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: <4D4328E3.6040809@lightlink.com> Dave Sill wrote: > On Fri, Jan 28, 2011 at 11:01 AM, Richard Loosemore > wrote: > > > What is sad about all this is that AI has been through so many of > these cycles. Thinking that dictionary lookup plus a few extras is > all you need for intelligence. This is true. It is just that the > "few extras" are 99.999% of the problem. > > > I completely agree that Watson isn't AI, but I disagree that it's > trivial. From an AI perspective, a google search may be trivial, but the > ability to search hundreds of thousands of web sites instantly is > incredibly useful--and, although it's a simple idea, implementing it is > anything but trivial. > > Thinking of Watson as a next gen search engine one starts to see how > important it could be. Sure, I can type a query into google on my phone. > And I haven't tried it, but I think I can even speak a query into my > phone, though I don't think it'll speak the results back to me. But if I > could speak a query to Watson and get a spoken response almost > instantly? That would be awesome. "Who played bass on > In-a-gadda-da-vida?" "Lee Dorman". With Google that's going to take a > few minutes and a couple of searches--and you could easily get the wrong > answer. Now imagine a personal Watson that has access your personal > data. 
"What the name of the pizza joint in Peoria I went to back in > '05?" That would be handy. How about a chronological list of every known > pizza joint I've been to? I really just meant "trivial when taken as a step toward AGI". As a next-gen search engine? Well..... isn't it using a supercomputer just to answer Jeopardy questions? Richard Loosemore From rpwl at lightlink.com Fri Jan 28 20:53:30 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 28 Jan 2011 15:53:30 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: <4D432CCA.6040008@lightlink.com> Adrian Tymes wrote: > 2011/1/28 Dave Sill : >> These isolated systems act intelligent, but they're not really intelligent. >> They can't learn and they don't understand. Deep Blue could dominate me on >> the chess board but it couldn't beat a 4-year-old at tic tac toe. Make a >> system that knows nothing about tic tac toe but can learn the rules (via >> audio/video explanation by a human) and play the game, and I'll be >> impressed. > > Just to toss out a bit for contemplation: > > How do we know that there is not some similar trick, whereby a system could > do this and still not be what we would consider intelligent? > > Or rather, what kinds of tricks might allow for such a thing? > > Can't think of any? Neither could those who declared that chess grandmastery > required true intelligence...but they might not have known of the types of AI > tricks that were to come. > > There may be a good answer. If there is, it would be useful, in this > discussion, > to have it. There are two answers to your question. First, I don't think anyone seriously said that chess grandmastery required true intelligence .... 
they knew quite well that they were building AI systems that played chess without general intelligence. What they actually believed was that building a chess AI would *help* them on the road to building a general AI. Second, if someone built an AGI that could develop its own general concepts, and learn new skills by itself, I simply do not believe that anyone would then come along and say "These are just tricks: real intelligence is something more than this". AI folks are fond of that excuse they made up: "As soon as we build a program that does something intelligent, everyone turns around and claims that THAT is not intelligence after all." This is and always was a piece of exaggerated nonsense. What actually happened was that AI programs were able to do some smart things in ways that contained no generality to them, and people outside the field quite rightly pointed out that unless the system was capable of generalizing its knowledge and skills, it was not intelligent. Nobody changed their tune about what "intelligence" really is. Rather, the AI community was caught selling fake goods, and somebody called them on it. Richard Loosemore From sparge at gmail.com Fri Jan 28 21:17:18 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 28 Jan 2011 16:17:18 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: On Fri, Jan 28, 2011 at 2:43 PM, Adrian Tymes wrote: > Just to toss out a bit for contemplation: > > How do we know that there is not some similar trick, whereby a system could > do this and still not be what we would consider intelligent? > If it's a trick, then the system won't be able to learn and play other games of similar complexity. 
If there's a "trick" that lets a system learn and play all such simple games, then it's not really a trick because it's generally applicable. Or rather, what kinds of tricks might allow for such a thing? > > Can't think of any? Neither could those who declared that chess > grandmastery > required true intelligence...but they might not have known of the types of > AI > tricks that were to come. > Like Richard, I don't think anyone really thought chess required true intelligence. Some people might have thought that we'd never have the hardware to allow looking 20 moves ahead, but that was obviously shortsighted. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Jan 28 21:26:17 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 28 Jan 2011 16:26:17 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> On Jan 28, 2011, at 2:27 PM, Adrian Tymes wrote: > > Yep. Because, in the process of solving them, we keep finding tricks and > cheats that those who thought "X requires intelligence" didn't conceive of. Did it ever occur to you that our own human intelligence is not some mystical thing but just such a collection of "tricks and cheats"? Just because you have a very general understanding of how it does what it does doesn't mean it's not intelligent; you demand the "secret" of intelligence be specific enough to be encoded into the language of ones and zeros but at the same time it must be utterly mysterious, be purposeful but non-deterministic, and simultaneously be exact and vague.
So with all those contradictory requirements obviously you will never see a machine that you will call intelligent, but you will see a machine that uses "tricks and cheats" to perform any task you care to name better than you can, any task whatsoever. And if that's not "true intelligence" it's good enough for me. > Watson is not capable of doing anything but Jeopardy But Jeopardy includes not just having an encyclopedic knowledge of everything from pulsars to pop culture and finding the one and only fact wanted in a vast sea of facts based on a remark that is elliptically phrased, but also dealing with rhymes and riddles and even puns. To pretend that this is not impressive is silly. > it certainly didn't learn to do that on its own And you didn't learn to do what you do on your own either, you had teachers, you read books written by others and you watched what other people did. > Get me a computer that can learn to do things it was never programmed or > designed to do. Like finding the question to a strangely worded answer neither it nor anybody else on this planet had ever heard before? I just have little patience with the "if a man does it then it's intelligence but if a machine does it then it's not" school of thought. > Note that the Turing Test is a partial codification of this. If Watson can find a good question (even if it's not always the correct question) to any answer then that's not too far from the Turing Test. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Jan 28 21:20:13 2011 From: spike66 at att.net (spike) Date: Fri, 28 Jan 2011 13:20:13 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: Message-ID: <00bf01cbbf31$26f256f0$74d704d0$@att.net> ... On Behalf Of Keith Henson Subject: Re: [ExI] Help with freezing phenomenon On Fri, Jan 28, 2011 at 12:45 PM, "spike" wrote: snip >> Excellent! Thanks Amara.
I am thinking of alternatives to microwave >> temperature control as a means of freezing a brain in a controlled >> non-destructive manner... >...Spike, the point of all current procedures is to vitrify without any freezing at all, i.e., NO expansion... There is that, but I myself have not been convinced that vitrification is the way. Perhaps if I studied it more, it would work for me. >...Or are you just making a joke based on the way the brains were extracted from Egyptian mummies? No, I didn't know how they did the mummies. But ja, I do cut up too much. Then when I get an actual idea, it is hard to distinguish from the usual riffs. Wasn't it you who commented about creating room below the brain cavity to allow a bit of expansion? With respect to the vitrification notion, I now think I was mistaken. I might have even dreamed someone said something to that effect, ooopsy. {8-] >...BTW, cracking from thermally induced stress is a known problem. It is believed that there is little if any information loss from the cracking...Keith Ja, that doesn't seem exactly right to me bud. I know the theory is about having nanobots read the configuration and all that, but my intuition tells me that cracks in one's brain are a Bad Thing. In keeping with my earlier notion of cryonics as a marketing task, it would be a Good Thing if you could tell your clients you know how to keep them from cracking. Of course, if you had to tell them you need to drill out their damn nose, that would be a Bad Thing. >...PS Speaking of Egypt . . . I saw that. Crowds of guys out there, and not one of them had that whole sideways hands and feet thing going. spike From sparge at gmail.com Fri Jan 28 21:51:28 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 28 Jan 2011 16:51:28 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: 2011/1/28 John Clark > Like finding the question to a strangely worded answer neither it nor > anybody else on this planet had ever heard before? > That's exactly what Watson was designed and programmed to do. Make a machine with no Jeopardy-specific programming that can be taught through verbal human instruction to play the game, and that machine will almost certainly pass the Turing Test. Watson isn't even close. > I just have little patience with the "if a man does it then it's > intelligence but if a machine does it then it's not" school of thought. > The key is learning and understanding. It doesn't matter if it's a man or a machine, or if the machine is using one or more clever tricks. A machine that plays one game brilliantly but has no ability to learn other games isn't intelligent. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Jan 28 21:53:55 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 28 Jan 2011 16:53:55 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D432CCA.6040008@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> Message-ID: <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> On Jan 28, 2011, at 3:53 PM, Richard Loosemore wrote: > > I don't think anyone seriously said that chess grandmastery required true intelligence On Jan 28, 2011, at 4:17 PM, Dave Sill wrote: > Like Richard I don't think anyone really thought chess required true intelligence. 
Godel Escher Bach by Douglas R Hofstadter is the best book I have ever read in my life full stop. However, on page 678 of this 1979 book he says: "There may be programs that can beat anyone at chess, but they will not be exclusive chess players. They will be programs of GENERAL [his emphasis not mine] intelligence." Twenty years later Hofstadter said that sentence was "embarrassingly bad". John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Fri Jan 28 22:54:56 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 28 Jan 2011 15:54:56 -0700 Subject: [ExI] Transcendent Man documentary finally available for purchase! In-Reply-To: References: Message-ID: Bryan Bishop wrote: >Does anyone have a torrent yet? Shame on Ray. :-( Why "shame on Ray?" John On 1/28/11, Bryan Bishop wrote: > On Fri, Jan 28, 2011 at 5:44 AM, John Grigg > wrote: > >> I have been waiting ages (or at least it feels like it) for this to be >> released! >> > > Does anyone have a torrent yet? Shame on Ray. :-( > > - Bryan > http://heybryan.org/ > 1 512 203 0507 > From spike66 at att.net Fri Jan 28 23:09:39 2011 From: spike66 at att.net (spike) Date: Fri, 28 Jan 2011 15:09:39 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> Message-ID: <010a01cbbf40$7132fc10$5398f430$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark . Godel Escher Bach by Douglas R Hofstadter is the best book I have ever read in my life full stop.
John K Clark John that makes at least three of us who consider GEB our favorite book of all: You, Eliezer and me. There are likely others. Others, please? spike ps: Today being the 25th anniversary of the Challenger explosion, many of us are haunted by painful memories of that day, which was for some of us the most memorable day of our lives, one we would forget if we could. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Fri Jan 28 23:28:57 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Fri, 28 Jan 2011 16:28:57 -0700 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <001401cbbded$a3239cb0$e96ad610$@att.net> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> Message-ID: <4D435139.8090307@canonizer.com> On 1/26/2011 11:44 PM, spike wrote: > Mine is a fun brain in which to live. Ja! I'm sure looking forward to when we can eff the ineffable, and I can experience what you are talking about first hand. I think your brain would definitely be near the top of brains I'd like to try. I'd also like to try out someone's brain that claims we don't have qualia. But only for the experience, as I bet such would be terrible compared to the phenomenal qualia stuff going on in my brain. I sure have lots of much more than wonderful indescribable or ineffable experiences I would love to share or trade. I bet we'll be doing this and so much more (i.e. phenomenal uploads...) within 25 or so years. Then we and the world will be all very different for sure. Brent Allsop "Oh wow, THAT is what it is like to be a Spike! No wonder. Here, try this."
From kanzure at gmail.com Fri Jan 28 23:47:32 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 28 Jan 2011 17:47:32 -0600 Subject: [ExI] All videos of Humanity+ @ Caltech talks now posted online In-Reply-To: References: Message-ID: On Fri, Jan 28, 2011 at 5:43 PM, Thomas McCabe wrote: > http://www.youtube.com/user/humanityplusvideos if anyone hates youtube you can also get them here in their full format: http://diyhpl.us/~bryan/humanityplus-at-caltech/ -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Fri Jan 28 23:47:25 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 28 Jan 2011 18:47:25 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <010a01cbbf40$7132fc10$5398f430$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> Message-ID: 2011/1/28 spike > > > John that makes at least three of us who consider GEB our favorite book of > all: You, Eliezer and me. There are likely others. Others, please? > It's my favorite nonfiction book. Not sure how I'd rank it against my favorite fiction, though--if that even makes sense. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Jan 29 00:29:24 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 28 Jan 2011 19:29:24 -0500 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <004101cbbf04$87ee8000$97cb8000$@att.net> References: <000f01cbbddf$aa9f89d0$ffde9d70$@att.net> <009b01cbbe67$ca32b1e0$5e9815a0$@att.net> <20110128105602.GK23560@leitl.org> <004101cbbf04$87ee8000$97cb8000$@att.net> Message-ID: On Fri, Jan 28, 2011 at 11:00 AM, spike wrote: > ... 
On Behalf Of Eugen Leitl > Subject: Re: [ExI] Help with freezing phenomenon > > On Thu, Jan 27, 2011 at 06:03:33PM -0500, Mike Dougherty wrote: > >>> I'm fine with the concept of frozen heads in a tank. It was the > >>The proper term is patients. Neuropatients, specifically. There are very > good reasons to avoid using other terms...Eugen > > Ja. We must keep in mind that Alcor is actually (and primarily for many > people) a mortuary. In many if not most cases, the patient is the strong > believer in the notion of cryonics, but the family of the patient is not. > They play along to carry out the wishes of the (likely wealthy) deceased. > So we need to pay attention to the emotional and even spiritual needs of the > family, even if we would prefer to think of Alcor as a hospital for geeks, > rather than a trendy spendy mortuary. > Thanks for the "proper term" and admonition to use it in place of alternatives. Aside from the careful storage, is there intent to restore faculty at some later date? That would make the promise sound like a kind of one-way time travel of a duration not otherwise feasible. We've discussed that some people would likely be willing to travel one-way through space for the sake of an incredible adventure - wouldn't a great distance in time be a similarly incredible experience? I know, the average person isn't likely to think of it this way. From atymes at gmail.com Sat Jan 29 00:39:58 2011 From: atymes at gmail.com (Adrian Tymes) Date: Fri, 28 Jan 2011 16:39:58 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: 2011/1/28 John Clark : > And you didn't learn to do what you do on your own either, you had teachers, > you read books written by others and you watched what other people did. In fact, I have had to figure out how to do a great many things on my own, for want of teachers, books, or other people to copy. Not everything, of course, and the exact list depends on what you'd classify as an "example". It's a lot easier with examples or instruction. But, for me, it's not impossible without. From max at maxmore.com Fri Jan 28 23:58:12 2011 From: max at maxmore.com (Max More) Date: Fri, 28 Jan 2011 16:58:12 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <010a01cbbf40$7132fc10$5398f430$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> Message-ID: Not sure I'm willing to say it's the best of all, but it would be damn high on my list. I read it when I was about 17, and found it both highly entertaining and enlightening. Max 2011/1/28 spike > > > > > *From:* extropy-chat-bounces at lists.extropy.org [mailto: > extropy-chat-bounces at lists.extropy.org] *On Behalf Of *John Clark > *?* > > > > Godel Escher Bach by Douglas R Hofstadter is the best book I have every > read in my life full stop. John K Clark > > > > > > John that makes at least three of us who consider GEB our favorite book of > all: You, Eliezer and me. There are likely others. Others, please? 
> > > > spike > > > > > > ps: Today being the 25th anniversary of the Challenger explosion, many of > us are haunted by painful memories of that day, which was for some of us the > most memorable day of our lives, one we would forget if we could. > > > > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E. Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Jan 29 02:13:22 2011 From: spike66 at att.net (spike) Date: Fri, 28 Jan 2011 18:13:22 -0800 Subject: [ExI] Help with freezing phenomenon In-Reply-To: <4D435139.8090307@canonizer.com> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> Message-ID: <017301cbbf5a$1b398f30$51acad90$@att.net> ... On Behalf Of Brent Allsop Subject: Re: [ExI] Help with freezing phenomenon On 1/26/2011 11:44 PM, spike wrote: >> ...Mine is a fun brain in which to live. >... I think your brain would definitely be near the top of brains I'd like to try... You are too kind sir, and yes I do have fun in here. >... I'd also like to try out someones' brain that claims we don't have qualia... Likewise, yours is a brain I would like to try on, just to figure out what is qualia. I confess I have never understood that concept, but do not feel you must attempt to explain it to me. I have read the qualia posts for years. If I haven't gotten it by now, good chance one more may not help. >... But only for the experience, as I bet such would be terrible compared to the phenomenal qualia stuff going on in my brain. Is (or are) qualia quantized? If so, are qualia quanta called qualons? 
And if one has exactly one qualon, would the plural form qualia still apply, or would it just be the singular qualium? Would the study of measurement of the amount of qualia be qualiametric qualiology? Sorry Brent, I just don't get it. But do not feel compelled to answer this silliness. I probably will never get this until technology allows me to get inside your brain. >... within 25 or so years. Then we and the world will be all very different for sure... Brent Allsop No need to wait, we and the world are all very different now. Do let us ponder this early and often: how very lucky we are, right here, right now. Compare ourselves to other humans throughout history. There is nothing outside wishing to devour us, we are not cold or hungry. If you argue in fact you are hungry, then go raid the refrigerator, keep in mind you have a refrigerator, and it is full of food. We are all lucky enough to have witnessed a mind-blowing explosion of technology in our own lifetimes. Such an amazing thing is this! We watch in real time as Egypt goes into raging upheaval, watch as our favorite sport goes on, all in real time, without even leaving our own home. We control the information streams like never before. That box sitting in front of you right now is a miracle that so few humans have had the good fortune to own. spike From huffmantm at gmail.com Sat Jan 29 04:19:21 2011 From: huffmantm at gmail.com (Todd Huffman) Date: Sat, 29 Jan 2011 08:49:21 +0430 Subject: [ExI] All videos of Humanity+ @ Caltech talks now posted online In-Reply-To: References: Message-ID: Thanks all! -.- .--- -.... 
.--- --.- --.- Todd Huffman HuffmanTM at gmail.com Office: (765) 633-2691 Twitter: @toddhuffman On Sat, Jan 29, 2011 at 4:17 AM, Bryan Bishop wrote: > On Fri, Jan 28, 2011 at 5:43 PM, Thomas McCabe wrote: >> >> http://www.youtube.com/user/humanityplusvideos > > if anyone hates youtube you can also get them here in their full format: > http://diyhpl.us/~bryan/humanityplus-at-caltech/ > > -- > - Bryan > http://heybryan.org/ > 1 512 203 0507 > From bbenzai at yahoo.com Sat Jan 29 15:20:26 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 29 Jan 2011 07:20:26 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: Message-ID: <686238.97178.qm@web114420.mail.gq1.yahoo.com> Adrian Tymes wrote: > 2011/1/25 Amon Zero : > > On 25 January 2011 17:04, Adrian Tymes > wrote: > >> Technically, what you speak of is agnosticism. > >> > >> Agnostic = absence of belief > >> Atheist = belief of absence > >> That's the difference between the terms. > > > > That's one definition, but not the only possible one. I 'believe' that is wrong. Agnosticism has nothing to do with belief, but with knowledge. An agnostic says it's not possible to know certain things, either in principle or in practice. Agnostic = absence of knowledge Atheist = absence of belief > It is the one I find in certain online dictionaries: > > http://dictionary.reference.com/browse/atheist > a person who denies or disbelieves the existence of a > supreme being or > beings. > > http://dictionary.reference.com/browse/agnostic > a person who holds that the existence of the ultimate > cause, as god, and the > essential nature of things are unknown and unknowable, or > that human > knowledge is limited to experience. I think you are over-interpreting this. "An atheist is one who denies the existence of a deity or of divine beings" "a person who does not believe in God or gods" That's not the same thing as "a person who believes there are no gods". 
It may seem a trivial distinction, but I think it's a very important one, as theists tend to use the argument "well, you atheists Believe there is no god, so yours is a Belief as well. Ner ner." This is a totally invalid argument, as long as you understand that atheism is NOT a Belief. It's just the opposite, a lack of Belief. To be honest, I think it's just down to whether people are capable of/willing to think for themselves or not. People who are, don't have Beliefs, because they're not necessary. People who aren't, usually do. You either think, or you Believe. Luther was right when he said that "Reason is the greatest enemy that Faith has". Ben Zaiboc From bbenzai at yahoo.com Sat Jan 29 15:50:26 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 29 Jan 2011 07:50:26 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: Message-ID: <32307.26663.qm@web114415.mail.gq1.yahoo.com> Stefano Vaj wrote: > On 25 January 2011 15:16, Ben Zaiboc > wrote: > > Atheism makes a reasonable assumption, based on the > available evidence (both the logical absurdities and the > lack of physical evidence). That's not a Belief. > > Why should we feel bound by "reasonable assumptions"? > > I consider it my duty to disbelieve the existence of the > supreme > entity of monotheistic religions as a matter of faith and > on moral > grounds. ;-) > > I do not see how the fans of those religions could ever > object to this. That's fine, and I wouldn't argue with it a bit. But it's not atheism. Speaking personally, if there really was a Supreme Being, and it was everything it's cracked up to be by the god-squad people, I'd feel morally obliged to oppose it. What kind of person wouldn't? "Yes god loves you, and wants you to suffer horribly forever because you ate peanut butter on a Thursday, and he really doesn't like that kind of behaviour".
Ben Zaiboc From jonkc at bellsouth.net Sat Jan 29 16:56:14 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 29 Jan 2011 11:56:14 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <686238.97178.qm@web114420.mail.gq1.yahoo.com> References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> Message-ID: <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> On Jan 29, 2011, at 10:20 AM, Ben Zaiboc wrote: > An agnostic says it's not possible to know certain things, either in principle or in practice. But they do have faith that it's not possible to know certain things either in principle or in practice. Technically I'm an agnostic too in that I can't prove the nonexistence of God, but people who go to great pains to point out that they are an agnostic, not an atheist, seem a little silly to me because, judging from the equal respect they give to both believers and atheists, they incorrectly think both viewpoints are equally rational. I am certainly not that sort of agnostic, not even technically. This is what Isaac Asimov had to say on the subject in his autobiography: "I am an atheist, out and out. It took me a long time to say it. I've been an atheist for years and years, but somehow I felt it was intellectually unrespectable to say one was an atheist, because it assumed knowledge that one didn't have. Somehow it was better to say one was a humanist or an agnostic. I finally decided that I'm a creature of emotion as well as of reason. Emotionally I am an atheist. I don't have the evidence to prove that God doesn't exist, but I so strongly suspect he doesn't that I don't want to waste my time." > if there really was a Supreme Being, and it was everything it's cracked up to be by the god-squad people, I'd feel morally obliged to oppose it. I agree and so does Asimov: "Properly read, the Bible is the most potent force for atheism ever conceived." John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hkeithhenson at gmail.com Sat Jan 29 17:49:34 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 29 Jan 2011 10:49:34 -0700 Subject: [ExI] Help with freezing phenomenon Message-ID: On Sat, Jan 29, 2011 at 5:00 AM, "spike" wrote: > ... On Behalf Of Keith Henson >>...Spike, the point of all current procedures is to vitrify without any > freezing at all, i.e., NO expansion... > > There is that, but I myself have not been convinced that vitrification is > the way. Perhaps if I studied it more, it would work for me. Actually, the damage from freezing is dehydration of the cells. Ice forms outside the cells, pulling water out and the salt concentration reaches damaging levels before the concentrated salt solution freezes. This is all fairly well understood because human embryos are routinely in cryoprotective solutions, cooled to LN2 temperature and later revived for implantation. Tens of thousands of examples walk the streets. > > Wasn't it you who commented about creating room below the brain cavity to > allow a bit of expansion? No. > > In keeping with my earlier notion of cryonics as a marketing task, it would > be a Good Thing if you could tell your clients you know how to keep them > from cracking. We don't. We have a choice of being honest and saying what we are reasonably sure about, and what we don't know how to do, and lying. If this conflicts with marketing, too bad. I don't know if cracking can be solved or not. It's partly an engineering problem and partly an economic problem. I can go into details if enough people are interested. Keith From sparge at gmail.com Sat Jan 29 17:57:30 2011 From: sparge at gmail.com (Dave Sill) Date: Sat, 29 Jan 2011 12:57:30 -0500 Subject: [ExI] interesting new NOVA episodes In-Reply-To: References: Message-ID: Speaking of NOVA, did anyone catch "Dogs Decoded"? I thought the whole thing was fascinating, but two things were especially interesting.
One was an experiment in which people raised first puppies, then wolf cubs. Dogs and wolves are genetically identical, but as the cubs grew up they didn't bond with their people and acted like wild animals. The second was the Russian experiment to domesticate silver foxes by selective breeding. They've been doing it for 50 years and have bred both a calmer strain and a more aggressive-to-humans one. Again, they're all genetically "the same", but when they have a tame mother raise an aggressive cub, it doesn't calm down at all. They've also found that as the calm strain becomes more domesticated, it gets more and more dog-like: shorter tail, curly tail, white markings, etc. It seems like a likely explanation for these two cases is epigenetics, but the NOVA episode said nothing about that. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Sat Jan 29 18:11:18 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 29 Jan 2011 19:11:18 +0100 Subject: [ExI] Help with freezing phenomenon In-Reply-To: References: Message-ID: <20110129181118.GF23560@leitl.org> On Sat, Jan 29, 2011 at 10:49:34AM -0700, Keith Henson wrote: > On Sat, Jan 29, 2011 at 5:00 AM, "spike" wrote: > > > ... On Behalf Of Keith Henson > > >>...Spike, the point of all current procedures is to vitrify without any > > freezing at all, i.e., NO expansion... > > > > There is that, but I myself have not been convinced that vitrification is > > the way. Perhaps if I studied it more, it would work for me. Vitrification is not trivial to control. But when it works, it works very well. > Actually, the damage from freezing is dehydration of the cells. Ice
> > This is all fairly well understood because human embryos are > routinely placed in cryoprotective solutions, cooled to LN2 temperature and > later revived for implantation. Tens of thousands of examples walk > the streets. > > > > Wasn't it you who commented about creating room below the brain cavity to > > allow a bit of expansion? > > No. > > > > In keeping with my earlier notion of cryonics as a marketing task, it would > > be a Good Thing if you could tell your clients you know how to keep them > > from cracking. > > We don't. We have a choice of being honest and saying what we are > reasonably sure about, and what we don't know how to do, and lying. > If this conflicts with marketing, too bad. > > I don't know if cracking can be solved or not. It's partly an > engineering problem and partly an economic problem. Cracking is largely solved by intermediate storage (Brian Wowk). I think it is largely a cosmetic problem. I don't see how anyone would expect that cryopreserved patients would be rewarmed as is, if at all. > I can go into details if enough people are interested. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Sat Jan 29 18:45:20 2011 From: spike66 at att.net (spike) Date: Sat, 29 Jan 2011 10:45:20 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <32307.26663.qm@web114415.mail.gq1.yahoo.com> References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> Message-ID: <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> Stefano Vaj and Ben Zaiboc wrote (attributions approximate): > Atheism makes a reasonable assumption, based on the available evidence (both the logical absurdities and the lack of physical evidence). That's not a Belief... >That's fine, and I wouldn't argue with it a bit. But it's not atheism...
> Why should we feel bound by "reasonable assumptions"?... This argument of how to define an atheist is analogous to how we are still trying to define artificial intelligence. Every time we reach a kilometerstone such as a computer chess champion, Jeopardy champion, chat-room Eliza which passes the Turing test and so forth, a chorus of geeks chant in unison (all together now): "But that isn't true intelllllligence." Reason: we understand exactly how it works. Almost by definition, real intelligence has to be something we do not understand. So we keep raising the bar whenever necessary. Apply the lesson to atheism, shall we? Everyone here will freely admit that there is, somewhere at some time, a most powerful and most smart being in the observable universe. Something somewhere must be in first place, ja? I recognize there might be arguments, analogous to our trying to identify the smartest person in the world. That smartest being *might* be a human, but it is easy enough to imagine an evolved superhuman being. So now, suppose we encounter a powerful being like Star Trek's Q. He is certainly superhuman, he gets things done by mysterious means, he is or can be an evil son of a bitch, so in that sense he resembles the old testament version of god. But it is pretty clear even Q cannot do everything god is supposed to be able to do, so the theologians tell us Q isn't god, and atheists are safe once again. Now suppose we get signals from the cosmos, and they send us the first 200 Mersenne primes. Q doesn't know past 60 of those. We know only 47 of them, after enormous expenditure of computer time and math skills. So whoever or whatever sent those is now defined as god. But wait. What is the 201st Mersenne prime, we pray. God doesn't know of course; she is an unimaginably smart and powerful being, but not infinitely so. Once again we raise the bar over her head, and once again atheists are safe. See the game here?
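For anyone playing along with spike's game: whether a Mersenne number 2^p - 1 is prime can be checked with the classic Lucas-Lehmer test. A minimal Python sketch (the candidate exponents below are just a few small odd primes for illustration, not the cosmic signal):

```python
# Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
# iff s_(p-2) == 0 (mod M_p), where s_0 = 4 and s_(k+1) = s_k**2 - 2.
def lucas_lehmer(p):
    m = (1 << p) - 1          # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Only some prime exponents yield Mersenne primes.
candidates = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
mersenne_exponents = [p for p in candidates if lucas_lehmer(p)]
print(mersenne_exponents)     # [3, 5, 7, 13, 17, 19, 31]
```

The test runs in time polynomial in p, which is why the 47 known examples took "enormous expenditure of computer time" only because the record exponents p run into the tens of millions.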
Before we can agree on a definition of atheist, we must be able to agree on a definition of god. I don't see that we have agreement on either. I don't see that we are any closer now than we were a decade ago to an actual definition of the term god. Without that definition, a definition of atheist is meaningless. spike From spike66 at att.net Sat Jan 29 18:57:13 2011 From: spike66 at att.net (spike) Date: Sat, 29 Jan 2011 10:57:13 -0800 Subject: [ExI] solar storm Message-ID: <01e501cbbfe6$578c8000$06a58000$@att.net> WOW check this: http://photoblog.msnbc.msn.com/_news/2011/01/28/5942494-double-whammy-on-the -sun?gt1=43001 It really caught my attention because of this study: http://www.msnbc.msn.com/id/40660701/ns/technology_and_science-space/ I have been pondering this since it came out last month. In principle I don't see any good reason why it must be wrong. If true, this is exciting indeed, a new day for solar astronomy. I always assumed solar storms were always localized, but had no justification for that assumption. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Sat Jan 29 19:11:30 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 29 Jan 2011 14:11:30 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <010a01cbbf40$7132fc10$5398f430$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> Message-ID: <4D446662.3020501@lightlink.com> spike wrote: > From: John Clark > Godel Escher Bach by Douglas R Hofstadter is the best book I have every > read in my life full stop. John K Clark > > > John that makes at least three of us who consider GEB our favorite book > of all: You, Eliezer and me. 
There are likely others. Others, please? I also. It turned me from a physicist to a cognitive scientist/AI person. Richard Loosemore From msd001 at gmail.com Sat Jan 29 20:30:49 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 29 Jan 2011 15:30:49 -0500 Subject: [ExI] solar storm In-Reply-To: <01e501cbbfe6$578c8000$06a58000$@att.net> References: <01e501cbbfe6$578c8000$06a58000$@att.net> Message-ID: 2011/1/29 spike : > WOW check this: > http://photoblog.msnbc.msn.com/_news/2011/01/28/5942494-double-whammy-on-the-sun?gt1=43001 > > It really caught my attention because of this study: > > http://www.msnbc.msn.com/id/40660701/ns/technology_and_science-space/ My first thought was that it looks like something went through the sphere. Is it possible this is the detection of a particle traveling through space that coincidentally happened to have blown right through the sun? If so, what would happen if something like that pierced the earth? From sparge at gmail.com Sat Jan 29 20:34:02 2011 From: sparge at gmail.com (Dave Sill) Date: Sat, 29 Jan 2011 15:34:02 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> Message-ID: On Sat, Jan 29, 2011 at 1:45 PM, spike wrote: > Every time we reach a > kilometerstone such as a computer chess champion, Jeopardy champion, > chat-room Eliza which passes the Turing test and so forth, a chorus of > geeks > chant in unison (all together now): "But that isn't true intelllllligence." > Reason: we understand exactly how it works. Almost by definition, real > intelligence has to be something we do not understand. So we keep raising > the bar whenever necessary. > Sorry, no, that's not it at all. Real intelligence is the ability to learn, to understand, to make inferences. We may not understand the process today, but I don't think it's beyond our comprehension.
BTW, where's this chat-room Eliza that passes the Turing test? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Jan 29 20:56:47 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 29 Jan 2011 14:56:47 -0600 Subject: [ExI] test Message-ID: <4D447F0F.8080508@satx.rr.com> I get intermittently rejected. Let's try again. From thespike at satx.rr.com Sat Jan 29 21:01:21 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 29 Jan 2011 15:01:21 -0600 Subject: [ExI] test In-Reply-To: <4D447F0F.8080508@satx.rr.com> References: <4D447F0F.8080508@satx.rr.com> Message-ID: <4D448021.2020702@satx.rr.com> On 1/29/2011 2:56 PM, Damien Broderick wrote: > I get intermittently rejected. Let's try again. Weird. Okay, let's try again with my post sent 2 hours ago: On 1/29/2011 12:45 PM, spike wrote: > Before we can agree on a definition of atheist, we must be able to agree on > a definition of god. A being that is self-subsistent, aware and volitional, and ontologically and etiologically prior to the observable contingent universe that it has brought into existence. That's a very abstract Greek model, and very different from the Roman deities and many other weakly godlike entities in various cultures. But it's what most Western theologians posit. Damien Broderick From pharos at gmail.com Sat Jan 29 21:12:59 2011 From: pharos at gmail.com (BillK) Date: Sat, 29 Jan 2011 21:12:59 +0000 Subject: [ExI] test In-Reply-To: <4D447F0F.8080508@satx.rr.com> References: <4D447F0F.8080508@satx.rr.com> Message-ID: On Sat, Jan 29, 2011 at 8:56 PM, Damien Broderick wrote: > I get intermittently rejected. Let's try again. > I think most men have that problem. And trying again later is the usual solution. BillK From spike66 at att.net Sat Jan 29 21:13:54 2011 From: spike66 at att.net (spike) Date: Sat, 29 Jan 2011 13:13:54 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams.
In-Reply-To: References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> Message-ID: <021a01cbbff9$6f497c80$4ddc7580$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Dave Sill >>. Almost by definition, real intelligence has to be something we do not understand. So we keep raising the bar whenever necessary. spike >.Sorry, no, that's not it at all. Real intelligence is the ability to learn, to understand, to make inferences. (1) to learn, (2) to understand, (3) to make inferences. Two of these have been accomplished and one is difficult to define. A chess program which keeps a memory of which openings it tried and how the game ended, then makes adjustments to its own play is an example of (1) learning. Done. (3) Making inferences (depending on how it is defined) is how Watson operates when playing Jeopardy. Two down, one to go. (2) To understand. Hmmm, to understand. Well you might have me on that one. I would be tempted to point to humans who clearly do not understand. I don't understand a lot of things that others get: human emotions for instance. To argue that (2) to understand is a necessary requisite for intelligence requires further definition. >. We may not understand the process today, but I don't think it's beyond our comprehension. Agreed. I argue that it may be remarkably difficult to recognize intelligence if we see it. We may not understand understanding. >.BTW, where's this chat-room Eliza that passes the Turing test? -Dave I don't know. Does anyone have that? Last time I went looking for it online, it was gone. It's been at least 6 years ago. A professor rigged up a specialized version of Eliza and set it to go into a teen chat room. His reasoning is that most people who went to college in the 70s or 80s probably played with Eliza, but those who were born after about 1985 might not have even heard of it. 
Sure enough, most of the teens chatted away with it for a while before becoming suspicious. Many of those who did figure it out did so because the responses were so fast. A lot of the entries were "Damn you type fast." One particular discussion went on for over 50 minutes. The participant apparently had no idea he was conversing with software. He definitely spilled some of what I would call innermost thoughts. Hey I did that too back in 1980, but I knew it was just a game. Some argued this was in a sense a successful Turing Test, others argued it doesn't count since many of the participants did not know computers could converse, or simulate conversation. The guy who posted all that stuff may have realized it wasn't quite legitimate and apparently took it down, but there may be references to it somewhere, or quotes from that site. It's been long enough we could probably pull the same gag again with a new set of innocents. Anyone here have that, or a reference to it? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Sat Jan 29 23:21:16 2011 From: sparge at gmail.com (Dave Sill) Date: Sat, 29 Jan 2011 18:21:16 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <021a01cbbff9$6f497c80$4ddc7580$@att.net> References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> <021a01cbbff9$6f497c80$4ddc7580$@att.net> Message-ID: 2011/1/29 spike > > > > (1) to learn, (2) to understand, (3) to make inferences. Two of these have > been accomplished and one is difficult to define. A chess program which > keeps a memory of which openings it tried and how the game ended, then makes > adjustments to its own play is an example of (1) learning. > That's a chess-specific example of learning. I'm talking about general learning. Can one teach Deep Blue to play checkers? Nope. Then Deep Blue isn't really intelligent. Done.
(3) Making inferences (depending on how it is defined) is how > Watson operates when playing Jeopardy. > Again, that's domain-specific. If I can sit down and chat with Watson, explain things, ask it questions that require it to make inferences, and get answers that demonstrate that it has done that, then I'll count it. > Two down, one to go. (2) To understand. Hmmm, to understand. Well you > might have me on that one. I would be tempted to point to humans who > clearly do not understand. I don't understand a lot of things that others > get: human emotions for instance. > I didn't say "understand everything". There's a wide range of understanding of an enormous range of topics demonstrated by the 6 billion proles on the planet at the moment. None of them understand everything and all of them are human-level intelligent. To argue that (2) to understand is a necessary requisite for intelligence > requires further definition. > http://en.wikipedia.org/wiki/Understanding > >. We may not understand the process today, but I don't think it's beyond > our comprehension. > > > > Agreed. I argue that it may be remarkably difficult to recognize > intelligence if we see it. We may not understand understanding. > I don't think we know everything there is to know about it, but most can tell pretty quickly when someone else doesn't understand something. > >.BTW, where's this chat-room Eliza that passes the Turing test? -Dave > > > > I don't know. Does anyone have that? Last time I went looking for it > online, it was gone. It's been at least 6 years ago. A professor rigged up > a specialized version of Eliza and set it to go into a teen chat room. His > reasoning is that most people who went to college in the 70s or 80s probably > played with Eliza, but those who were born after about 1985 might not have > even heard of it.
Many of those who did figure it out did > so because the responses were so fast. A lot of the entries were "Damn you > type fast." > That's a cute story about some people who were duped by a chatbot. That's not exactly a Turing test, where an interviewer is chatting with two entities, one human, one AI, for the purpose of determining which is which. Fooling unsuspecting people is *way* easier than fooling a competent, informed interviewer. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jan 30 01:07:48 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 29 Jan 2011 19:07:48 -0600 Subject: [ExI] test In-Reply-To: References: <4D447F0F.8080508@satx.rr.com> Message-ID: <4D44B9E4.2000702@satx.rr.com> On 1/29/2011 3:12 PM, BillK wrote: >> I get intermittently rejected. Let's try again. > I think most men have that problem. > And trying again later is the usual solution. I'm guessing this is some kind of sexual jest, but if so it must refer to something I've never experienced. Damien Broderick From darren.greer3 at gmail.com Sun Jan 30 02:00:05 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 29 Jan 2011 22:00:05 -0400 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> Message-ID: >people who go to great pains to point out that they are a agnostic not a atheist seem a little silly to me because, judging from the equal respect they give to both believers and atheists< Who said an agnostic has to afford anything approaching respect to an atheist or a believer based on their belief or lack of it? An agnostic could just as well despise both camps for presuming to know or not know the existence of something for which they have no solid evidence either for or against.
This is the point, anyway, where the labels themselves become more important than their denotative meanings, always a breaking down point. I joined the atheist nexus and started blocking the e-mails because so much time was spent discussing what an atheist is and what he believes that I was convinced we were soon to start breaking off into denominations. Lately I just think of myself as someone who simply doesn't believe in the supernatural. That includes gods, witches, archangels and Richard Gere's mythical hamster. And if you're going to argue any of those things *are* natural, you better be able to prove it, or at least show me tangible, verifiable proof of it that is repeatable by experiment. Or at the very least offer a theory that you can back up with something besides "I am" or "I believe" or "I know." As for the romantic in me, I like Carl Sagan's fictional scenario: if we're going to look for a universal creator, let's start by digging around in irrational numbers. Darren 2011/1/29 John Clark > On Jan 29, 2011, at 10:20 AM, Ben Zaiboc wrote: > > An agnostic says it's not possible to know certain things, either in > principle or in practice. > > > But they do have faith that it's not possible to know certain things either > in principle or in practice. > > Technically I'm an agnostic too in that I can't prove the nonexistence of > God, but people who go to great pains to point out that they are a agnostic > not a atheist seem a little silly to me because, judging from the equal > respect they give to both believers and atheists, they incorrectly think > both viewpoints are equally rational. I am certainly not that sort of > agnostic, not even technically. > > This is what Isaac Asimov had to say on the subject in his autobiography: > > "I am an atheist, out and out. It took me a long time to say it.
I've been > an atheist for years and years, but somehow I felt it was intellectually > unrespectable to say one was an atheist, because it assumed knowledge that > one didn't have. Somehow it was better to say one was a humanist or an > agnostic. I finally decided that I'm a creature of emotion as well as of > reason. Emotionally I am an atheist. I don't have the evidence to prove that > God doesn't exist, but I so strongly suspect he doesn't that I don't want to > waste my time." > > if there really was a Supreme Being, and it was everything it's cracked up > to be by the god-squad people, I'd feel morally obliged to oppose it. > > > I agree and so does Asimov: > > "Properly read, the Bible is the most potent force for atheism ever > conceived." > > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Sun Jan 30 11:53:26 2011 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sun, 30 Jan 2011 12:53:26 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D446662.3020501@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> <4D446662.3020501@lightlink.com> Message-ID: I just don't like it. 
- Thomas On Sat, Jan 29, 2011 at 8:11 PM, Richard Loosemore wrote: > spike wrote: > >> From: John Clark >> >> Godel Escher Bach by Douglas R Hofstadter is the best book I have every >> read in my life full stop. John K Clark >> >> John that makes at least three of us who consider GEB our favorite book of >> all: You, Eliezer and me. There are likely others. Others, please? >> > > I also. It turned me from a physicist to a cognitive scientist/AI person. > > > > > Richard Loosemore > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Sun Jan 30 12:25:40 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 30 Jan 2011 05:25:40 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> <4D446662.3020501@lightlink.com> Message-ID: Tomaz Kristan wrote: >I just don't like it. Why not? John On 1/30/11, Tomaz Kristan wrote: > I just don't like it. > > - Thomas > > On Sat, Jan 29, 2011 at 8:11 PM, Richard Loosemore > wrote: > >> spike wrote: >> >>> From: John Clark >>> >>> Godel Escher Bach by Douglas R Hofstadter is the best book I have every >>> read in my life full stop. John K Clark >>> >>> John that makes at least three of us who consider GEB our favorite book >>> of >>> all: You, Eliezer and me. There are likely others. Others, please? >>> >> >> I also. It turned me from a physicist to a cognitive scientist/AI person. 
>> >> >> >> >> Richard Loosemore >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > From protokol2020 at gmail.com Sun Jan 30 12:49:02 2011 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sun, 30 Jan 2011 13:49:02 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> <4D446662.3020501@lightlink.com> Message-ID: G and E and B are quite different subjects. A quote from the book: *"I was trynig to think how to best to symbolize the unity of Goedel, Escer and Bach ..."* * * Well ... too artificial, if you ask me. On Sun, Jan 30, 2011 at 1:25 PM, John Grigg wrote: > Tomaz Kristan wrote: > >I just don't like it. > > Why not? > > John > > On 1/30/11, Tomaz Kristan wrote: > > I just don't like it. > > > > - Thomas > > > > On Sat, Jan 29, 2011 at 8:11 PM, Richard Loosemore > > wrote: > > > >> spike wrote: > >> > >>> From: John Clark > >>> > >>> Godel Escher Bach by Douglas R Hofstadter is the best book I have every > >>> read in my life full stop. John K Clark > >>> > >>> John that makes at least three of us who consider GEB our favorite book > >>> of > >>> all: You, Eliezer and me. There are likely others. Others, please? > >>> > >> > >> I also. It turned me from a physicist to a cognitive scientist/AI > person. 
> >> > >> > >> > >> > >> Richard Loosemore > >> > >> > >> > >> _______________________________________________ > >> extropy-chat mailing list > >> extropy-chat at lists.extropy.org > >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > >> > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Sun Jan 30 13:06:20 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 30 Jan 2011 06:06:20 -0700 Subject: [ExI] test In-Reply-To: <4D44B9E4.2000702@satx.rr.com> References: <4D447F0F.8080508@satx.rr.com> <4D44B9E4.2000702@satx.rr.com> Message-ID: Damien Broderick wrote: I get intermittently rejected. Let's try again. BillK replied: I think most men have that problem. And trying again later is the usual solution. Damien Broderick stated: I'm guessing this is some kind of sexual jest, but if so it must refer to something I've never experienced. >>> This television ad character was based on Damien's life... http://www.youtube.com/watch?v=8Bc0WjTT0Ps http://www.youtube.com/watch?v=fYdwe3ArFWA&NR=1 http://www.youtube.com/watch?v=-ChYmr3xyNU&NR=1 http://www.youtube.com/watch?v=BpwpfmBK41M&NR=1 http://www.youtube.com/watch?v=yYEhzCGHU_U&feature=related http://www.youtube.com/watch?v=YIOdgAeVmfQ&feature=channel Barbara Lamar, his wife, rejected over 1,000 suitors until Damien came along and won her heart like a Greek hero of old... John ; ) On 1/29/11, Damien Broderick wrote: > On 1/29/2011 3:12 PM, BillK wrote: > >>> I get intermittently rejected. Let's try again. > >> I think most men have that problem. >> And trying again later is the usual solution. > > I'm guessing this is some kind of sexual jest, but if so it must refer > to something I've never experienced. 
> > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From jonkc at bellsouth.net Sun Jan 30 17:43:35 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 30 Jan 2011 12:43:35 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> Message-ID: <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> On Jan 29, 2011, at 9:00 PM, Darren Greer wrote: >> people who go to great pains to point out that they are a agnostic not a atheist seem a little silly to me because, judging from the equal respect they give to both believers and atheists > > Who said an agnostic has to afford anything approaching respect to an atheist or a believer based on their belief or lack of it? An agnostic could just as well despise both camps Who said? You said, and you said it just above; you describe an irrational agnostic who gives equal respect to both believers and atheists. In fact, regardless of what a dictionary may say, in usage (which is far more important than definitions) the word "agnostic" means pretty much what you say it does. And that's why I call myself a atheist and not a agnostic. > I like Carl Sagan's fictional scenario: if we're going to look for a universal creator, let's start by digging around in irrational numbers. Well, unlike God we know that Chaitin's constant (also called Omega) exists, and if even a few hundred integers in that irrational number were known it would solve most of the problems in number theory, a few thousand would solve them all, unfortunately we also know that Chaitin's constant can not even be approximated by finite beings like us. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Sun Jan 30 17:51:45 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 30 Jan 2011 12:51:45 -0500 Subject: [ExI] GEB (was: Re: Oxford scientists edge toward quantum PC with 10b qubits) In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> <4D446662.3020501@lightlink.com> Message-ID: <1E84AB35-7060-4F84-9199-48217D1D5E5C@bellsouth.net> On Jan 30, 2011, at 7:49 AM, Tomaz Kristan wrote: > [GEB is] too artificial, if you ask me. There is no disputing matters of taste, but like most people I much prefer a artificial environment to a natural one. People may go camping on the weekend (with lots of high tech equipment of course) but most are back in the city by monday. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Sun Jan 30 18:02:03 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 30 Jan 2011 10:02:03 -0800 (PST) Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: Message-ID: <691491.18760.qm@web114408.mail.gq1.yahoo.com> Darren Greer wrote: > An agnostic could > just as well despise both camps for presuming to know or > not know the > existence of something for which they have no solid > evidence either for or > against. So an agnostic could despise an atheist for presuming to not know the existence of something for which there is no evidence? Well, I suppose you're right, but I don't see why anyone would take any notice of them. > Lately I > just think of myself of someone who simply doesn't believe > in the > supernatural. That includes gods, witches, archangels and > Richard Gere's > mythical hamster. 
And if it you're going to argue any of > those things *are* > natural, you better be able to prove it, or at least show > me tangible, > verifiable proof of it that is repeatable by > experiment.? Or at the very > least offer a theory that you can back up with something > besides "I am" or > "I believe" or "I know." Ah, so you're an atheist. Ben Zaiboc From thespike at satx.rr.com Sun Jan 30 18:43:45 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 30 Jan 2011 12:43:45 -0600 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <691491.18760.qm@web114408.mail.gq1.yahoo.com> References: <691491.18760.qm@web114408.mail.gq1.yahoo.com> Message-ID: <4D45B161.1010909@satx.rr.com> On 1/30/2011 12:02 PM, Ben Zaiboc wrote: > So an agnostic could despise an atheist for presuming to not know the existence of something for which there is no evidence? This assertion is part of the problem. Few theists would agree that there's "no evidence" (not many say "I believe *because* it is absurd!")--they appeal to reports in sacred records, to alleged miracles, to powerful "experiences of the divine," and to the very existence of the universe. Each of these appeals is corrigible, but it's self-defeating to tell people that they lack evidence when the evidence seems all around them, and within them. You can try "no scientifically testable evidence" but many important aspects of our experience fail that test as well. Damien Broderick From jonkc at bellsouth.net Sun Jan 30 18:39:10 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 30 Jan 2011 13:39:10 -0500 Subject: [ExI] Help with freezing phenomenon. In-Reply-To: References: Message-ID: <7B99C835-5616-4E32-A7E4-1B942E3CD03F@bellsouth.net> On Jan 29, 2011, at 12:49 PM, Keith Henson wrote: > I don't know if cracking can be solved or not. It's partly an > engineering problem and partly an economic problem. 
It would seem to me that cracking in the brain would not be a very serious problem because it would be pretty obvious where the 2 sides of the crack used to be and how they should be lined-up to be repaired. Astronomically more devastating would be any micro-flow in the freezing organ that undergoes turbulence, because untangling a chaotic process and figuring out how things were arranged before the freezing induced turbulence occurred would be very hard, practically impossible, even for a Jupiter Brain. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Jan 30 18:32:29 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 30 Jan 2011 19:32:29 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> Message-ID: 2011/1/29 Dave Sill : > BTW, where's this chat-room Eliza that passes the Turing test? I believe we still have a somewhat mystical concept of intelligence. A contemporary Eliza program probably passes the Turing test over 10 exchanges 10% of the times with 10% of the people. I am not sure that the improvements in this area are exponential, but surely add up one day after another, and I would not be surprised to see in a few years something able to pass the test 25% of the times with 25% of the people over 100 exchanges, and so forth. A "real" AGI is no less and no more than an Eliza program the score of which approximates enough the average human score in the test. Or perhaps exceeds it ("sounds more human than an actual human being")... ;-) This would be per se an interesting achievement. But of course "intelligence", or rather computing power, can be devoted to many other things, and probably will... 
Ultimately, I believe that besides entertainment and research in the field of psychology, the major use for "AGI" entities will remain the desire not of creating emulations of a generic human being, but rather of specific ones, to ensure the perennity of the personality of the individual concerned (even those who are not inclined to consider that as a form of literal "immortality" may already have painters work on their portraits exactly for this purpose, after all...). -- Stefano Vaj From jonkc at bellsouth.net Sun Jan 30 18:46:58 2011 From: jonkc at bellsouth.net (John Clark) Date: Sun, 30 Jan 2011 13:46:58 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> <021a01cbbff9$6f497c80$4ddc7580$@att.net> Message-ID: <5961D9EB-3FF1-4CEE-B536-00A41046E980@bellsouth.net> On Jan 29, 2011, at 6:21 PM, Dave Sill wrote: > That's a chess-specific example of learning. I'm talking about general learning. Are you certain people are all that different? Bobby Fischer was probably the greatest human chess player who ever lived and he was a total ignoramus and a thoroughly inferior creature in every other aspect of existence. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sun Jan 30 18:59:08 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 30 Jan 2011 14:59:08 -0400 Subject: [ExI] Fw: Re: atheists declare religions as scams.
In-Reply-To: <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: Someone else wrote: >people who go to great pains to point out that they are a agnostic not a atheist seem a little silly to me because, judging from the equal respect they give to both believers and atheists I wrote: >Who said an agnostic has to afford anything approaching respect to an atheist or a believer based on their belief or lack of it? An agnostic could just as well despise both camps John wrote: >Who said? You said, and you said it just above; you describe an irrational agnostic who gives equal respect . . .< I didn't actually describe an agnostic in any such way. I was responding to someone else who gave that description. And my point remains: I don't see why an agnostic by definition should accord respect to a believer or non-believer. It's one thing to say what an agnostic or an atheist or a religionist believes or doesn't believe in general principle, but it's going pretty far down the line to say that these flimsy labels also describe their attitudes towards each other. It's as misleading as saying all atheists are uniformly contemptuous of the concept of transcendence, or all religionists foam at the mouth and handle snakes. >Well, unlike God we know that Chaitin's constant (also called Omega) exists, and if even a few hundred integers in that irrational number were known it would solve most of the problems in number theory, a few thousand would solve them all, unfortunately we also know that Chaitin's constant can not even be approximated by finite beings like us. < Sagan's scenario is interesting though. 
If suddenly with a decimal expansion of a sextillion integers in pi we find a repeating pattern that describes a perfect circle left there by a super-intelligent species that somehow can manipulate the very fabric of space-time and the mathematical structure of the universe-- not Gods but who would appear to us as Gods--then we have nothing to argue with religionists about. Far-fetched, perhaps, but for me, his point in Contact is well taken: that those who argue most strenuously about the universe are those who often look at it and feel the same sense of wonder and awe. It is the origins of the universe and the wonder it inspires that is contested, not the glory of the thing itself. Darren 2011/1/30 John Clark > On Jan 29, 2011, at 9:00 PM, Darren Greer wrote: > > people who go to great pains to point out that they are a agnostic not a > atheist seem a little silly to me because, judging from the equal respect > they give to both believers and atheists > > > Who said an agnostic has to afford anything approaching respect to an > atheist or a believer based on their belief or lack of it? An agnostic could > just as well despise both camps > > > Who said? You said, and you said it just above; you describe an irrational agnostic > who gives equal respect to both believers and atheists. In fact, regardless > of what a dictionary may say, in usage (which is far more important than > definitions) the word "agnostic" means pretty much what you say it does. > And that's why I call myself a atheist and not a agnostic. > > I like Carl Sagan's fictional scenario: if we're going to look for a > universal creator, let's start by digging around in irrational numbers. 
> > > Well, unlike God we know that Chaitin's constant (also called Omega) > exists, and if even a few hundred integers in that irrational number were > known it would solve most of the problems in number theory, a few thousand > would solve them all, unfortunately we also know that Chaitin's constant can > not even be approximated by finite beings like us. > > John K Clark > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Sun Jan 30 19:40:13 2011 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sun, 30 Jan 2011 20:40:13 +0100 Subject: [ExI] GEB (was: Re: Oxford scientists edge toward quantum PC with 10b qubits) In-Reply-To: <1E84AB35-7060-4F84-9199-48217D1D5E5C@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <4D432CCA.6040008@lightlink.com> <0B827B42-B057-4087-AB78-D6715F4C6FAC@bellsouth.net> <010a01cbbf40$7132fc10$5398f430$@att.net> <4D446662.3020501@lightlink.com> <1E84AB35-7060-4F84-9199-48217D1D5E5C@bellsouth.net> Message-ID: I like artificial. But not logically artificial. T. 2011/1/30 John Clark > On Jan 30, 2011, at 7:49 AM, Tomaz Kristan wrote: > > [GEB is] too artificial, if you ask me. > > > There is no disputing matters of taste, but like most people I much prefer > a artificial environment to a natural one. People may go camping on the > weekend (with lots of high tech equipment of course) but most are back in > the city by monday.
> > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Jan 30 19:34:19 2011 From: spike66 at att.net (spike) Date: Sun, 30 Jan 2011 11:34:19 -0800 Subject: [ExI] fischer, positive feedback loops Message-ID: <00a601cbc0b4$b1187f10$13497d30$@att.net> .. On Jan 29, 2011, at 6:21 PM, Dave Sill wrote: >>.That's a chess-specific example of learning. I'm talking about general learning. >.Are you certain people are all that different? Bobby Fischer was probably the greatest human chess player who ever lived and he was a total ignoramus and a thoroughly inferior creature in every other aspect of existence.. John K Clark Some non-players who met Fischer thought he was retarded. {8^D Of course at the chess club he was treated as a god. Grown men would hang on his every word when he was elementary school age. Soon he couldn't function in the real world, but in that artificial world of chess, he was the undisputed king. It is easy to see why such a person would hang out at the club. That is a classic positive feedback loop. I am thinking of ways of making positive feedback loops in artificial intelligence software. The most remarkable thing is that such a giant emerged from the US, which has long been considered a chess backwater. On a lighter note, this morning an American won a top level chess championship which included the top four highest rated players in the world, along with nine others, in a 13 round tournament: http://www.chessbase.com/newsdetail.asp?newsid=6983 This is the first time an American has been at the top of the chess heap since Fischer retired 35 years ago. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Sun Jan 30 20:04:01 2011 From: pharos at gmail.com (BillK) Date: Sun, 30 Jan 2011 20:04:01 +0000 Subject: [ExI] fischer, positive feedback loops In-Reply-To: <00a601cbc0b4$b1187f10$13497d30$@att.net> References: <00a601cbc0b4$b1187f10$13497d30$@att.net> Message-ID: 2011/1/30 spike wrote: > Some non-players who met Fischer thought he was retarded. > > > On a lighter note, this morning an American won a top level chess > championship which included the top four highest rated players in the world, > along with nine others, in a 13 round tournament: > > http://www.chessbase.com/newsdetail.asp?newsid=6983 > > This is the first time an American has been at the top of the chess heap > since Fischer retired 35 years ago. > > American???? With a name like Hikaru Nakamura???? Wikipedia says he was born in Hirakata, Osaka Prefecture, Japan, to a Japanese father and an American mother. At the age of two, he moved with his parents to the United States. So I'll give you half-American. ;) BillK From eugen at leitl.org Sun Jan 30 20:18:21 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 30 Jan 2011 21:18:21 +0100 Subject: [ExI] Help with freezing phenomenon. In-Reply-To: <7B99C835-5616-4E32-A7E4-1B942E3CD03F@bellsouth.net> References: <7B99C835-5616-4E32-A7E4-1B942E3CD03F@bellsouth.net> Message-ID: <20110130201821.GN23560@leitl.org> On Sun, Jan 30, 2011 at 01:39:10PM -0500, John Clark wrote: > On Jan 29, 2011, at 12:49 PM, Keith Henson wrote: > > > I don't know if cracking can be solved or not. It's partly an > > engineering problem and partly an economic problem. > > It would seem to me that cracking in the brain would not be a very serious problem because it would be pretty obvious where the 2 sides of the crack used to be and how they should be lined-up to be repaired.
Astronomically more devastating would be any micro-flow in the freezing organ that undergoes turbulence, because untangling a chaotic process and figuring out how things were arranged before the freezing induced turbulence occurred would be very hard, practically impossible, even for a Jupiter Brain. A frequent damage mode is growth of ice needles, creating an interdigitated volume, eventually enclosed, which in conjunction with volume expansion creates very high pressures in the enclosed volume and rapid flow through the interdigitated needles and channels across potentially large (100-1000 um) distances. In general you see large regions of ice with small compressed volumes of original tissue -- a very striking difference from vitrified tissue, which appears essentially identical to controls. Information erasure in physical processes is possible, and in fact it happens frequently. (The information leaves the volume at the speed of light, so you need effectively omniscient and omnipotent systems to reverse -- while the Omega point theory has been falsified). -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Sun Jan 30 20:37:42 2011 From: spike66 at att.net (spike) Date: Sun, 30 Jan 2011 12:37:42 -0800 Subject: [ExI] fischer, positive feedback loops In-Reply-To: References: <00a601cbc0b4$b1187f10$13497d30$@att.net> Message-ID: <00bd01cbc0bd$8b2830d0$a1789270$@att.net> ...> On Behalf Of BillK ... 2011/1/30 spike wrote: > >...Some non-players who met Fischer thought he was retarded. > > >>... On a lighter note, this morning an American won a top level chess championship ... > > http://www.chessbase.com/newsdetail.asp?newsid=6983 ... >American???? With a name like Hikaru Nakamura????
>Wikipedia says he was born in Hirakata, Osaka Prefecture, Japan, to a Japanese father and an American mother. At the age of two, he moved with his parents to the United States. >So I'll give you half-American. ;) BillK {8^D Ooo, careful with that one, BillK. Had a USian made that comment, the yahoos would read into it a criticism of our current president. Of course being British, we will give you a hall pass. {8^D spike From thespike at satx.rr.com Sun Jan 30 21:11:48 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 30 Jan 2011 15:11:48 -0600 Subject: [ExI] fischer, positive feedback loops In-Reply-To: <00bd01cbc0bd$8b2830d0$a1789270$@att.net> References: <00a601cbc0b4$b1187f10$13497d30$@att.net> <00bd01cbc0bd$8b2830d0$a1789270$@att.net> Message-ID: <4D45D414.4050105@satx.rr.com> On 1/30/2011 2:37 PM, spike wrote: >> Wikipedia says he was born in Hirakata, Osaka Prefecture, Japan, to a > Japanese father and an American mother. At the age of two, he moved with his > parents to the United States. > > Ooo, careful with that one, BillK. Had a USian made that comment, the > yahoos would read into it a criticism of our current president. You think being born in Hawaii in 1961 is equivalent to being born in Japan? No, actually, Hawaii has been a US State since 1959. Or so we are taught in Australia (not yet a US State). Damien Broderick From stefano.vaj at gmail.com Sun Jan 30 22:20:19 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 30 Jan 2011 23:20:19 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <4D45B161.1010909@satx.rr.com> References: <691491.18760.qm@web114408.mail.gq1.yahoo.com> <4D45B161.1010909@satx.rr.com> Message-ID: On 30 January 2011 19:43, Damien Broderick wrote: > Each > of these appeals is corrigible, but it's self-defeating to tell people that > they lack evidence when the evidence seems all around them, and within them.
> You can try "no scientifically testable evidence" but many important aspects > of our experience fail that test as well. I agree. Moreover, the real issues with Middle East-style supposed Supreme Beings, besides being "ideological", are logical, not empirical. While Spiderman or Silver Surfer or the gods of most religions might in principle exist as well in the empirical realm (even though this is not actually a claim of most of their fans...), and a discussion thereupon could therefore make sense to an extent, I am inclined to consider hypotheses concerning Jahveh or Allah "not even wrong", and would not dignify them with arguments on the burden of proof or indirect evidence supposedly in place. -- Stefano Vaj From spike66 at att.net Mon Jan 31 00:06:03 2011 From: spike66 at att.net (spike) Date: Sun, 30 Jan 2011 16:06:03 -0800 Subject: [ExI] fischer, positive feedback loops In-Reply-To: References: <00a601cbc0b4$b1187f10$13497d30$@att.net> Message-ID: <00e301cbc0da$a6617740$f32465c0$@att.net> ... >>>American???? With a name like Hikaru Nakamura???? >>> Wikipedia says he was born in Hirakata, Osaka Prefecture, Japan, >>> to a Japanese father and an American mother. At the age of two, he moved >>> with his parents to the United States. BillK >> Ooo, careful with that one, BillK. Had a USian made that comment, the >> yahoos would read into it a criticism of our current president. >You think being born in Hawaii in 1961 is equivalent to being born in Japan? No, actually, Hawaii has been a US State since 1959. Or so we are taught in Australia (not yet a US State)...Damien Broderick No. What I meant was the comment "American???? With a name like..." can easily be interpreted as a snarky reference to the name Obama. For the record, I am not one of those who thinks Obama was born in Kenya or anywhere outside of Hawaii. One of our own highly esteemed ExI types was also born in Hawaii in 1961: Amara.
{8-] spike From hkeithhenson at gmail.com Mon Jan 31 00:31:17 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 30 Jan 2011 17:31:17 -0700 Subject: [ExI] atheists declare religions as scams. Message-ID: snip I think atheists would be much better off to try to understand why (in an evolutionary sense) humans have religions at all. I make one case, perhaps there are better explanations. Keith From thespike at satx.rr.com Mon Jan 31 01:55:26 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 30 Jan 2011 19:55:26 -0600 Subject: [ExI] fischer, positive feedback loops In-Reply-To: <00e301cbc0da$a6617740$f32465c0$@att.net> References: <00a601cbc0b4$b1187f10$13497d30$@att.net> <00e301cbc0da$a6617740$f32465c0$@att.net> Message-ID: <4D46168E.7010106@satx.rr.com> On 1/30/2011 6:06 PM, spike wrote: > >> >You think being born in Hawaii in 1961 is equivalent to being born in > Japan? No, actually, Hawaii has been a US State since 1959. Or so we are > taught in Australia (not yet a US State)...Damien Broderick > No. What I meant was the comment "American???? With a name like..." can > easily be interpreted as a snarky reference to the name Obama. Ah, okay, good. Blame my short attention span from one sentence to the next. OMG, I'm finally turning into an American! Damien Broderick From sparge at gmail.com Mon Jan 31 02:34:01 2011 From: sparge at gmail.com (Dave Sill) Date: Sun, 30 Jan 2011 21:34:01 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <5961D9EB-3FF1-4CEE-B536-00A41046E980@bellsouth.net> References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> <021a01cbbff9$6f497c80$4ddc7580$@att.net> <5961D9EB-3FF1-4CEE-B536-00A41046E980@bellsouth.net> Message-ID: 2011/1/30 John Clark > Are you certain people are all that different? > Yes, completely.
> Bobby Fischer was probably the greatest human chess player who ever lived > and he was a total ignoramus and a thoroughly inferior creature in every > other aspect of existence. > Nonetheless, I'm confident that he could have learned checkers and did learn many things besides chess. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Jan 31 03:27:26 2011 From: spike66 at att.net (spike) Date: Sun, 30 Jan 2011 19:27:26 -0800 Subject: [ExI] fischer, positive feedback loops In-Reply-To: <4D46168E.7010106@satx.rr.com> References: <00a601cbc0b4$b1187f10$13497d30$@att.net> <00e301cbc0da$a6617740$f32465c0$@att.net> <4D46168E.7010106@satx.rr.com> Message-ID: <010001cbc0f6$c88b47d0$59a1d770$@att.net> ... On Behalf Of Damien Broderick ... >> No. What I meant was the comment "American???? With a name like..." >> can easily be interpreted as a snarky reference to the name Obama. >OMG, I'm finally turning into an American! Damien Broderick American???? With a name like Damien???? {8^D I have been watching the mainstream news. Not one of the mainstreamers thought it newsworthy that an actual AMERICAN guy won the biggest international chess competition of the year. Oops, retract. The New York Times just popped up with it, 21 minutes ago. Of course the NYT has its own chess blog. I want to see if any of the majors will carry this as a news feature or even as a sports feature. In any case, Nakamura will likely return to St. Louis with perhaps a dozen fans to greet and congratulate him at the airport. No ticker tape parade, nothing like what a chess champion of pretty much *any* other nation would enjoy. Jeez we are primitive savages. spike From spike66 at att.net Mon Jan 31 03:45:28 2011 From: spike66 at att.net (spike) Date: Sun, 30 Jan 2011 19:45:28 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams.
In-Reply-To: References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> <021a01cbbff9$6f497c80$4ddc7580$@att.net> <5961D9EB-3FF1-4CEE-B536-00A41046E980@bellsouth.net> Message-ID: <010401cbc0f9$4d9c26e0$e8d474a0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Dave Sill . Bobby Fischer was probably the greatest human chess player who ever lived and he was a total ignoramus and a thoroughly inferior creature in every other aspect of existence. Nonetheless, I'm confident that he could have learned checkers and did learn many things besides chess. -Dave This was an extremely interesting debate back in the day. There were those who claimed Fischer was extremely intelligent, as evidenced by his ability to speak Russian, German, French and Spanish, in addition to his native English. Others pointed out that he didn't really speak these languages, but rather had mastered the specific chess terms in those languages. The game does have its own specialized language. Speakers of those languages in the chess world may not have even realized Fischer had no idea how to converse on other topics outside the door of that club, even in his native language. My theory is that Fischer was very intelligent and super focused on the game, to the total exclusion of everything else, to the point of being completely dysfunctional and obsessive-compulsive. He came up with ideas over the board that amaze fans to this day, made what looked like crazy sacrifices that were perfectly sound, and resulted in his smashing his opponent off the board 12 or 15 moves down the road. Fischer focused everything he had on the 8x8 board. That was his whole world. Smart and super focused: a winning combination. Or losing, depending on how one defines win and loss. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From avantguardian2020 at yahoo.com Mon Jan 31 03:50:29 2011 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 30 Jan 2011 19:50:29 -0800 (PST) Subject: [ExI] Upright Ape Message-ID: <361822.72912.qm@web65608.mail.ac4.yahoo.com> http://www.ibtimes.com/articles/106300/20110128/ambam-the-gorilla-walking-upright-videos.htm Ambam comes from a family of gorillas that can walk upright. Isn't evolution beautiful? Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D. Eisenhower From eugen at leitl.org Mon Jan 31 10:57:59 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 31 Jan 2011 11:57:59 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: Message-ID: <20110131105759.GU23560@leitl.org> On Sun, Jan 30, 2011 at 05:31:17PM -0700, Keith Henson wrote: > snip > > I think atheists would be much better off to try to understand why (in > an evolutionary sense) humans have religions at all. http://postbiota.org/pipermail/tt/2010-December/008311.html referencing http://www.springerlink.com/content/m0v73485k8t58571/ > I make one case, perhaps there are better explanations. > > Keith > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From kellycoinguy at gmail.com Mon Jan 31 07:44:11 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 00:44:11 -0700 Subject: [ExI] Fw: Re: atheists declare religions as scams.
In-Reply-To: References: <32307.26663.qm@web114415.mail.gq1.yahoo.com> <01e401cbbfe4$ae53fe60$0afbfb20$@att.net> Message-ID: What of humans who fail the Turing test? I was on a technical support chat that was so disconnected that I accused the poor sap of being an artificial intelligence. Eventually, he insisted upon being human to such an extent that I think an AI would have given in and admitted it was an AI, so I kind of believed that he was a human. Albeit not a very convincing one. Kind of sucks to be him. -Kelly From stefano.vaj at gmail.com Mon Jan 31 15:49:39 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 31 Jan 2011 16:49:39 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: References: Message-ID: On 31 January 2011 01:31, Keith Henson wrote: > I think atheists would be much better off to try to understand why (in > an evolutionary sense) humans have religions at all. As long as a given form of atheism is not afraid of being a religion itself, I do not believe that it may pose such a great problem... BTW. there are already plenty of religions which are "atheistic" from a christian or islamic POV... -- Stefano Vaj From kellycoinguy at gmail.com Mon Jan 31 16:26:02 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 09:26:02 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D42E853.50706@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: On Fri, Jan 28, 2011 at 9:01 AM, Richard Loosemore wrote: > John Clark wrote: >> >> http://www.youtube.com/watch?v=WFR3lOm_xhE > Yes, but do you have any idea how trivial this is? Trivial!?! This is the final result of decades of research in both software and hardware. Hundreds of thousands of man hours have gone into the projects that directly led to this development. Trivial! You have to be kidding. 
The subtle language cues that are used on Jeopardy are not easy to pick up on. This is a really major advance in AI. I personally consider this to be a far more impressive achievement than Deep Blue learning to play chess. > The IBM computer playing Jeopardy is just a glorified version of Winograd's > SHRDLU. With enough information it can home in on answers to simple > questions. Doing that kind of stuff is like winning a barrelized > fish-shooting contest: if you have a big enough encyclopaedia in there, and > you do a fast enough search, you get near to the relevant facts. But that is > not the same as structured intelligence. Whatever you want to call the technological form, or despite the genealogical roots of this particular software/hardware, it is not trivial, and it is not shooting fish in a barrel. Having the facts, yes that is easy enough. Gathering the facts from natural language sources, parsing natural language questions (answers in this case) and answering them (questioning them in this case) is pretty dang cool. I would not call it trivial. > As I write these words I am sitting here getting ready to teach some > students enough physics and vector calculus that they can understand Gauss's > theorem, Maxwell's equations, the subtleties of EM induction ... and these > kids will (if we're lucky) be able to understand all that in a couple of > months' time. > > But if I tried to have a conversation with that IBM Jeopardy computer about > these things, would it be able to start understanding, if I took it real > slow? No, not at all. If you know something about the techniques and the > tricks that the IDIOT BLUE team are using to get their baby to do that > stuff, you will know that this is not a step on the road, it is a dead end. Here you may be right. Strong AI will probably not be based on this technology. However, I would certainly rather speak to Watson than the typical idiot I get on the phone on the typical technical support call.
I think that's what IBM is really about here. You don't need Strong AI to answer domain-specific questions like that, and this software apparently can do that. Granted, it's a lot of hardware today, but Moore's law will take care of that shortly. > What is sad about all this is that AI has been through so many of these > cycles. Thinking that dictionary lookup plus a few extras is all you need > for intelligence. This is true. It is just that the "few extras" are > 99.999% of the problem. Richard, do you think computers will achieve Strong AI eventually? I do. I think it will likely come from reverse engineering brains, or perhaps creating bio computers like those fun little robots that run around on rat brains (see YouTube). -Kelly From kellycoinguy at gmail.com Mon Jan 31 16:36:24 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 09:36:24 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: 2011/1/28 Dave Sill : > 2011/1/28 John Clark >> > These isolated systems act intelligent, but they're not really intelligent. > They can't learn and they don't understand. Deep Blue could dominate me on > the chess board but it couldn't beat a 4-year-old at tic tac toe. Make a > system that knows nothing about tic tac toe but can learn the rules (via > audio/video explanation by a human) and play the game, and I'll be > impressed. Actually, it is pretty trivial for a computer to learn tic-tac-toe without any explanation at all. With a 1970s-era neural network, the only feedback required for it to learn the rules of the game is whether it won or lost. It even learns to take turns if you define "lose" as playing with the wrong "color" or taking two turns in a row.
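That learn-from-outcome loop can be sketched in a few lines. The code below is an illustration only, not the 1970s network mentioned above, and every name in it is invented for the example: a tabular "afterstate" value learner whose sole training signal is each finished game's win, draw, or loss.

```python
import random

# Illustrative sketch (hypothetical names throughout): a tabular value
# learner for tic-tac-toe. Its ONLY feedback is the final outcome of each
# game: win = 1.0, draw = 0.5, loss = 0.0. No rules are given in advance.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, s in enumerate(board) if s == " "]

class OutcomeLearner:
    def __init__(self, mark, eps=0.1, alpha=0.5):
        self.V = {}          # board string after my move -> estimated value
        self.mark, self.eps, self.alpha = mark, eps, alpha
        self.history = []    # afterstates visited this game

    def _after(self, board, i):
        b = board[:]
        b[i] = self.mark
        return "".join(b)

    def act(self, board):
        opts = moves(board)
        if random.random() < self.eps:
            m = random.choice(opts)   # explore
        else:                         # exploit: pick highest-valued afterstate
            m = max(opts, key=lambda i: self.V.get(self._after(board, i), 0.5))
        board[m] = self.mark
        self.history.append("".join(board))

    def learn(self, reward):
        # back the final outcome up through this game's afterstates
        target = reward
        for state in reversed(self.history):
            v = self.V.get(state, 0.5)
            v += self.alpha * (target - v)
            self.V[state] = v
            target = v
        self.history = []

def play(learner, train=True):
    """One game: learner is X (moves first), opponent plays random O."""
    board = [" "] * 9
    turn = "X"
    while True:
        if turn == "X":
            learner.act(board)
        else:
            board[random.choice(moves(board))] = "O"
        w = winner(board)
        if w or not moves(board):
            if train:
                learner.learn(1.0 if w == "X" else 0.0 if w == "O" else 0.5)
            else:
                learner.history = []
            return w
        turn = "O" if turn == "X" else "X"

random.seed(2011)
bot = OutcomeLearner("X")
for _ in range(20000):
    play(bot)

bot.eps = 0.0  # stop exploring for evaluation
wins = sum(play(bot, train=False) == "X" for _ in range(1000))
print("win rate vs. random:", wins / 1000)
```

With the value table trained this way, greedy play wins the large majority of games against a random opponent, even though no rule of tic-tac-toe was ever stated explicitly; the illegal-move and turn-taking variants Kelly describes just fold those violations into the "loss" signal.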
This is not particularly impressive of course because tic-tac-toe is a pretty small game tree. Now, if you want a challenge for a computer, try the oriental board game Go. As far as I know, there aren't any computers that can grok that as good as people yet. I'm sure it's coming soon though. :-) I think the problem is really related to the definition of intelligence. Nobody has really defined it, so the definition seems to fall out as "Things people do that computers don't do yet." So what is "Things computers do that people can't do"? Certainly it is not ALL trivial stuff. For example, using genetic algorithms, computers have designed really innovative jet engines that no people ever considered. Is that artificial intelligence (i.e. the kind people can't do?) -Kelly From kellycoinguy at gmail.com Mon Jan 31 16:57:00 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 09:57:00 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: 2011/1/28 Dave Sill : > 2011/1/28 John Clark >> >> Like finding the question to a strangely worded answer neither it nor >> anybody else on this planet had ever heard before? > > That's exactly what Watson was designed and programmed to do. Make a machine > with no Jeopardy-specific programming that can be taught through verbal > human instruction to play the game, and that machine will almost certainly > pass the Turing Test. Watson isn't even close. So when IBM creates a machine with the specific programming task of "Pass the Turing Test" that won't be intelligence either, because it was programmed to pass the Turing test... right??? Again, I just don't think anyone has a clue how to define intelligence or consciousness. 
We have trouble knowing whether fellow human beings are really conscious. Ayn Rand seemed to think most of her fellow travelers were, in fact, not conscious. I don't know that I go along with that level of skepticism, but most of us have had the childish delusion that we ourselves were the only "real" people. If we can't even figure out for sure whether our fellow humans are conscious, how are we going to determine if a machine is? >> I just have little patience with the "if a man does it then it's >> intelligence but if a machine does it then it's not" school of thought. > > The key is learning and understanding. It doesn't matter if it's a man or a > machine, or if the machine is using one or more clever tricks. A machine > that plays one game brilliantly but has no ability to learn other games > isn't intelligent. The right question here seems to me to be "Does Watson Learn?" Everything I have read seems to indicate that Watson knows answers to questions because Watson has processed a huge amount of free text from the Internet or perhaps Wikipedia or something. The point is that nobody sat down and programmed Watson to answer specific questions. This seems like "learning" by "reading" to me, and if so, that is a tremendous new capability (at least at this level of utility) for computers. If you asked Watson questions about Jeopardy, I'd bet it could answer a lot of them. It isn't that it "knows" anything. I don't have any belief that Watson is conscious or anything like that. But there are days that even Google "seems" to be intelligent... Wolfram Alpha is another step in this same general direction. Will general intelligence emerge out of these kinds of inference engines? I tend to think not, but maybe. As for GEB, I haven't read it for a VERY long time, but I recall that I liked the Bach and Escher parts, but the Godel stuff didn't resonate.
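The gap between merely storing text and answering questions from it can be illustrated with a toy retriever. This is bag-of-words overlap only, nowhere near what Watson's actual DeepQA pipeline does, and the corpus and class names are made up for the example:

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

class TinyReader:
    """Toy 'learning by reading': index free-text sentences as word bags,
    then answer a question with the best-overlapping sentence."""
    def __init__(self, corpus):
        self.sentences = corpus
        self.bags = [Counter(tokenize(s)) for s in corpus]

    def answer(self, question):
        qbag = Counter(tokenize(question))
        scores = [sum((bag & qbag).values()) for bag in self.bags]
        return self.sentences[scores.index(max(scores))]

reader = TinyReader([
    "Vincent van Gogh painted Sunflowers in 1888.",
    "Leonardo da Vinci painted the Mona Lisa.",
    "The Great Wall of China is visible from low orbit.",
])
print(reader.answer("Who painted Sunflowers?"))
# -> Vincent van Gogh painted Sunflowers in 1888.
```

Nobody "programmed in" the Van Gogh fact; it came from the text. Whether that counts as knowing anything is exactly the question being argued here.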
-Kelly From sparge at gmail.com Mon Jan 31 17:05:10 2011 From: sparge at gmail.com (Dave Sill) Date: Mon, 31 Jan 2011 12:05:10 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: On Mon, Jan 31, 2011 at 11:36 AM, Kelly Anderson wrote: > 2011/1/28 Dave Sill : > > > These isolated systems act intelligent, but they're not really > intelligent. > > They can't learn and they don't understand. Deep Blue could dominate me > on > > the chess board but it couldn't beat a 4-year-old at tic tac toe. Make a > > system that knows nothing about tic tac toe but can learn the rules (via > > audio/video explanation by a human) and play the game, and I'll be > > impressed. > > Actually, it is pretty trivial for a computer to learn tic-tac-toe > without any explanation at all. Perhaps, but I don't think it's trivial for a computer to learn it via an explanation, and the communication, reasoning, problem solving, and understanding required to do so make it a good test of real intelligence. Of course, tic-tac-toe is just an example. If I were tasked to conduct a Turing test I wouldn't use tic-tac-toe, I'd make up a simple game of my own. > Now, if you want a challenge for a computer, try the oriental board > game Go. As far as I know, there aren't any computers that can grok > that as good as people yet. I'm sure it's coming soon though. :-) > No doubt. And it'll be impressive. But it'll still just be a Go computer and not generally intelligent. I think the problem is really related to the definition of > intelligence. Nobody has really defined it... 
Wikipedia has a pretty good one: "Intelligence is an umbrella term describing a property of the mind including related abilities, such as the capacities for abstract thought, understanding, communication, reasoning, learning, learning from past experiences, planning, and problem solving." ... so the definition seems to > fall out as "Things people do that computers don't do yet." I disagree. Show me a computer that meets the above definition of intelligence at an average human level. > So what is > "Things computers do that people can't do"? Certainly it is not ALL > trivial stuff. For example, using genetic algorithms, computers have > designed really innovative jet engines that no people ever considered. > Is that artificial intelligence (i.e. the kind people can't do?) You mean that people have designed and used programs with genetic algorithms to create innovative designs. Or did a computer wake up one day and say "hey, I've got wicked new idea for a jet engine!"? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 31 17:02:42 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 31 Jan 2011 12:02:42 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: On Jan 30, 2011, at 1:59 PM, Darren Greer wrote: > I don't see why an agnostic by definition should accord respect to a believer or non-believer. There is not one scrap of evidence that a teapot is in orbit around the planet Uranus and there is not one scrap of evidence that there is not such a vessel; nevertheless I do not treat both possibilities with equal respect so I am not a Uranus teapot agnostic. I am a teapot atheist. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sparge at gmail.com Mon Jan 31 17:21:09 2011 From: sparge at gmail.com (Dave Sill) Date: Mon, 31 Jan 2011 12:21:09 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: On Mon, Jan 31, 2011 at 11:57 AM, Kelly Anderson wrote: > > > So when IBM creates a machine with the specific programming task of > "Pass the Turing Test" that won't be intelligence either, because it > was programmed to pass the Turing test... right??? > Wrong, because the Turing test is designed to test general intelligence. And there's no "pass the Turing test". It's not like the SATs, there's not one single Turing test that, if passed, grants an AI a certificate of intelligence. But an AI that accumulates a record in various Turing tests against different interviewers equivalent to its human competitors would demonstrate human-equivalent intelligence. Again, I just don't think anyone has a clue how to define intelligence > or consciousness. Intelligence is pretty straightforward. See Wikipedia. What does consciousness have to do it, though? > > The key is learning and understanding. It doesn't matter if it's a man > or a > > machine, or if the machine is using one or more clever tricks. A machine > > that plays one game brilliantly but has no ability to learn other games > > isn't intelligent. > > The right question here seems to me to be "Does Watson Learn?" > Everything I have read seems to indicate that Watson knows answers to > questions because Watson has processed a huge amount of free text from > the Internet or perhaps Wikipedia or something. The point is that > nobody sat down and programmed Watson to answer specific questions. 
> This seems like "learning" by "reading" to me, and if so, that is a > tremendous new capability (at least at this level of utility) for > computers. > It's learning in the sense that Google "learns" what's on the web by sucking down a copy of it. OK, it's a little more sophisticated than that since it has to do some parsing. But does Watson learn from its mistakes? Does it learn from its opponent's successes? I don't know. Does it understand anything? I doubt it. If you asked Watson questions about Jeopardy, I'd bet it could answer > a lot of them. It isn't that it "knows" anything. I don't have any > belief that Watson is conscious or anything like that. Wait a minute...you just got done saying Watson learned all kinds of stuff by reading it. Now you say it doesn't know any of that because it isn't conscious? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Mon Jan 31 17:22:46 2011 From: sparge at gmail.com (Dave Sill) Date: Mon, 31 Jan 2011 12:22:46 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: 2011/1/31 John Clark > There is not one scrap of evidence that a teapot is in orbit around the > planet Uranus and there is not one scrap of evidence that there is not such > a vessel; nevertheless I do not treat both possibilities with equal respect > so I am not a Uranus teapot agnostic. I am a teapot atheist. > We have a winner. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Jan 31 17:31:05 2011 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 31 Jan 2011 09:31:05 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: On Mon, Jan 31, 2011 at 8:57 AM, Kelly Anderson wrote: > So when IBM creates a machine with the specific programming task of > "Pass the Turing Test" that won't be intelligence either, because it > was programmed to pass the Turing test... right??? There is reason to believe that the Turing Test can not be passed, without the kind of generality needed for AI (or, more properly, Artificial General Intelligence, which is what people often mean when they mention "true" AI), and that Watson, chessmaster computers, and other specific-feat programs have yet to display. The reason? Talk about one subject. Then talk about something else. A human can handle this - even if they are not an expert in all things (which no human is, though some try to pretend they are). These AIs completely break down. If they were capable of conversing on one topic using limited terms and grammar, they can not form coherent responses on any other topic. Which leads to the interesting question: how, exactly, does one distinguish the best current conversational AIs from humans? It is easy for most people to do (if they are aware that they might be talking to an AI and have been tasked with identifying it), but is the process easy to describe? Among the things I am aware of: 1. Lack of memory. In many cases, the AI won't remember what you said two sentences ago, let alone display human-equivalent medium to long term memory. 2. Inability to learn - which is a consequence of 1. You can not teach one of these AIs even a simple game, in the manner you would conversationally teach an 8 year old. 3. Lack of initiative. Most of these AIs are reactive only. 
When deprived of outside stimuli, such as a human talking to it, they just sit there and do nothing, as if unaware of the passage of time. (In a human, this would be called "vegetative state", and is one of the criteria used to legally designate a given human body as something to be no longer treated as a full human being unless and until it recovers from that condition - which, in most cases, is seen as effectively impossible due to the causes of that condition.) From kellycoinguy at gmail.com Mon Jan 31 17:34:54 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 10:34:54 -0700 Subject: [ExI] interesting new NOVA episodes In-Reply-To: References: Message-ID: 2011/1/29 Dave Sill : > Speaking of NOVA, did anyone catch "Dogs Decoded"? I thought the whole thing > was fascinating, but two things especially interesting. I really enjoyed that episode as well. > One was an > experiment in which people raised first puppies then wolf cubs. Dogs and > wolves are genetically identical, but as the cubs grew up they didn't bond > with their people and acted like wild animals. Dogs and wolves are genetically similar. If they were identical, they would be clones. What is fair to say is that they are for all practical purposes, still the same species, since they can interbreed. On that basis, it would seem that since there is evidence that humans and Neanderthals interbred, that Neanderthals are "identical" to humans, which I don't think anyone here would agree with. The key I got out of the show wasn't so much that the wolf cubs didn't "bond" with their humans, but rather that they were not able to pick up on the subtle cues that the humans put out and were thus unable to learn proper behavior effectively. Puppies outperformed great apes in responding to pointing and facial cues. So somewhere in the genetic mixing, dogs obtained a mechanism for reading human faces and gestures that is unique to dogs. Wolves don't have that. 
Other primates don't have that. Only dogs. Very interesting stuff. > The second was the Russian > experiment to domesticate silver foxes by selective breeding. They've been > doing it for 50 years and have bred both a calmer strain and a > more aggressive-to-humans one. Again, they're all genetically "the same", Meaning only that they have not created a new species. The amount of Serotonin in the brains of the two groups is off the charts different, and that likely comes from the difference in genetics between the two groups. I would guess that the amount of Serotonin has something to do with the expression of the genes that make floppy ears, tail changes, different coloration of hair and so forth. > but when they have a tame mother raise an aggressive cub, it doesn't calm > down at all. They've also found that as the calm strain becomes more > domesticated, it gets more and more dog-like: shorter tail, curly tail, > white markings, etc. This wasn't in the show... but I heard from other (unverified) sources that the original intention of the breeders who began this program was to create foxes that were easier to handle for fur raising. They were very disappointed when it didn't work out (the colored fur was not acceptable for making fox fur coats) and I think at that point the scientists took over because of the interesting results. > It seems like a likely explanation for these two cases is epigenetics, but > the NOVA episode said nothing about that. I think there are genetic differences sufficient to explain what is going on... it doesn't take very many gene differences to have this sort of thing come out. Occam would probably go with gene drift over epigenetics for most of what you're seeing. OTOH, the friendliness of a specific animal would clearly go up with human handling. Ferrets have to be raised with lots of human handling when they are little, or they grow up to be completely wild. I can say from personal experience that it is much the same with cats.
I have a few cats that weren't properly loved as kittens, and they are as wild as the outdoors. I hope some of the foxes escape from their lab into the pet marketplace, they are pretty darn cute. -Kelly From spike66 at att.net Mon Jan 31 17:23:41 2011 From: spike66 at att.net (spike) Date: Mon, 31 Jan 2011 09:23:41 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: <001f01cbc16b$9b556f00$d2004d00$@att.net> ... On Behalf Of Kelly Anderson ... >> That's exactly what Watson was designed and programmed to do. Make a machine with no Jeopardy-specific programming that can be taught through verbal human instruction to play the game, and that machine will almost certainly pass the Turing Test. Watson isn't even close. >...So when IBM creates a machine with the specific programming task of "Pass the Turing Test" that won't be intelligence either, because it was programmed to pass the Turing test... right??? Again, I just don't think anyone has a clue how to define intelligence or consciousness... -Kelly Ja. We are quick to require computers to be able to learn and make inferences before we are willing to define them as intelligent, but I want us to carefully examine every requirement and make what engineers call a RVM or requirements verification matrix. It's a formal way to make sure what we are asking is logically consistent. Some forms of intelligence may not really require very much learning. Immediately I think of my parents' neighbor who turned 90 last month. She is a delightful person, filled with stories of Oregon from the 1920s. She doesn't really learn very much, couldn't really tell you the more recent neighbor's names. 
She isn't picking up a lot of what I tell her, but that is perfectly OK, for I do a lot more listening than I do talking when she is with us. She is intelligent in that she knows how to construct a good narrative, can respond to questions and so forth. The other day, we were searching the internet for a song from her youth called "Cold Tater and Wait" by Jimmy Dickens. She was amazed to learn Jimmy Dickens is also still living, also aged 90. Consider the computer in front of you right now, and treat the internet as part of that system. It can respond to your questions in a way, has an enormous storehouse of knowledge, beyond the comprehension of the 90 year old neighbor. The internet represents an intelligent being, in a sense. spike From atymes at gmail.com Mon Jan 31 17:38:40 2011 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 31 Jan 2011 09:38:40 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: 2011/1/31 John Clark : > On Jan 30, 2011, at 1:59 PM, Darren Greer wrote: > > I don't see why an agnostic by definition should accord respect to a > > believer or non-believer. > > There is not one scrap of evidence that a teapot is in orbit around the > planet Uranus and there is not one scrap of evidence that there is not such > a vessel; nevertheless I do not treat both possibilities with equal respect > so I am not a Uranus teapot agnostic. I am a teapot atheist. Actually, there are scraps of evidence against. Voyager 2 made a swing by Uranus, and failed to see such a vessel. Further, no telescope-based observations - not even the ones that can detect extrasolar planets by the wobble they impart on their suns - has yet detected a similar gravitational anomaly in Uranus. 
A teapot in orbit around Uranus could conceivably have escaped such observations, but it is unlikely, and there is no evidence in favor of such a teapot. Furthermore, anyone seriously proposing such a teapot would likely make an observably deliberate effort to make the proposal untestable - and when tests come up to prove or disprove it anyway, the response (possibly preemptive) would be to revise the proposal to invalidate that particular test. From kellycoinguy at gmail.com Mon Jan 31 17:52:22 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 10:52:22 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: >2011/1/31 Dave Sill : > Perhaps, but I don't think it's trivial for a computer to learn it via an > explanation, and the communication, reasoning, problem solving, and > understanding required to do so make it a good test of real intelligence. Of > course, tic-tac-toe is just an example. If I were tasked to conduct a Turing > test I wouldn't use tic-tac-toe, I'd make up a simple game of my own. I think as long as you stuck with board games, where it is known that there are rules and that by playing, those rules can be discovered, you could create a specialty computer program (not a general AI) using today's technology that could learn any arbitrary game. It would not get as good on its own as the best humans in many cases. In general, computers beat humans when the game tree is small enough, and they suck when the game tree gets unwieldy and there is no good pruning or goodness that is easily discoverable. In other words, if IBM spent as much time and research developing a program to learn to play arbitrary board games as they have on Watson, I think they would come up with something that would be surprisingly good.
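The "small enough game tree" case can be made concrete: tic-tac-toe's entire tree fits in memory, so a dozen lines of memoized negamax play it perfectly. This is standard textbook technique, nothing specific to IBM or Watson:

```python
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Exact game value for the side to move: +1 win, 0 draw, -1 loss.
    The board is a 9-character string so it is hashable for the cache."""
    if winner(board):
        return -1                      # the previous move just won
    if " " not in board:
        return 0                       # board full: draw
    other = "O" if player == "X" else "X"
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, sq in enumerate(board) if sq == " ")

print(value(" " * 9, "X"))  # 0: with the whole tree searched, perfect play draws
```

When the tree does not fit (chess, and far more so Go), this exact search becomes impossible and the program's strength depends entirely on how well it prunes and evaluates - which is the part Kelly notes is hard to discover.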
Again, that's not general AI, just another very small slice. The question is how many very small slices do you need to build up a Strong AI?? The answer seems to be "all of them" that humans have, and while that is an interesting answer that leads directly to machine learning, is it a useful answer? In other words, is what IBM is doing with Watson useful? Damn right it is. >> Now, if you want a challenge for a computer, try the oriental board >> game Go. As far as I know, there aren't any computers that can grok >> that as good as people yet. I'm sure it's coming soon though. :-) > > No doubt. And it'll be impressive. But it'll still just be a Go computer and > not generally intelligent. In all probability that is correct. >> >> I think the problem is really related to the definition of >> intelligence. Nobody has really defined it... > > Wikipedia has a pretty good one: > "Intelligence is an umbrella term describing a property of the mind > including related abilities, such as the capacities for abstract thought, > understanding, communication, reasoning, learning, learning from past > experiences, planning, and problem solving." By this definition, a computer will never have intelligence because someone will say, But the computer doesn't have a "mind". It's all a bit circular. I have seen individual computer programs that exhibit all of the characteristics (one at a time) in that list, but I wouldn't consider any of them intelligent, except over a very limited domain. >> ... so the definition seems to >> fall out as "Things people do that computers don't do yet." > > I disagree. Show me a computer that meets the above definition of > intelligence at an average human level. There isn't one. But in 2060 when there is a computer that meets and exceeds the above definition on every measurable level and by every conceivable test, there will still be people (maybe not you, but some people) who will say, but it's all just an elaborate parlor trick.
The computer isn't REALLY intelligent. In my experience, anything that escapes AI gets a new name. Pattern recognition, computer vision, natural language processing, optical character recognition, facial recognition, etc. etc. So that for all practical purposes AI is forever the stuff we don't know how to do very well yet. The first computer that passes the Turing test (and I'm sure there are weaker and stronger forms of the Turing test) will no doubt have a technology with a name, and that name will probably not be "artificial intelligence"... >> So what is >> "Things computers do that people can't do"? Certainly it is not ALL >> trivial stuff. For example, using genetic algorithms, computers have >> designed really innovative jet engines that no people ever considered. >> Is that artificial intelligence (i.e. the kind people can't do?) > > You mean that people have designed and used programs with genetic algorithms > to create innovative designs. Or did a computer wake up one day and say > "hey, I've got wicked new idea for a jet engine!"? As you are aware, computers are not self aware or self directed at this point. My argument is not that computers have already achieved artificial intelligence, just that they show a glimmer of hope in the area, and that they have done things that people haven't, even in the area of "creativity" and "art" where people are supposed to be the masters of the domain. Do the computer programs that generate new compositions in the style of (insert your favorite classical composer here) have artificial intelligence in that area? Or is it just another technology that has escaped AI and gotten a new name? 
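For the genetic-algorithm point above: the technique itself is simple to sketch. The toy below evolves a single number toward a target objective; real engineering GAs (like the jet-engine work mentioned) run the same select/crossover/mutate loop over vastly richer design encodings. The objective and all parameters here are made up for illustration:

```python
import random

def fitness(x):
    # Toy objective standing in for an engineering merit function;
    # the optimum is at x = 3.14.
    return -(x - 3.14) ** 2

def evolve(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]         # selection: keep the fittest quarter
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                # crossover: blend two parents
            child += rng.gauss(0, 0.1)         # mutation: small random tweak
            children.append(child)
        pop = parents + children               # parents survive, so the best is never lost
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near the optimum at 3.14
```

The interesting philosophical wrinkle is visible even here: the loop "invents" a solution nobody typed in, yet every step is mechanical selection over random variation.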
-Kelly From stefano.vaj at gmail.com Mon Jan 31 18:04:27 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 31 Jan 2011 19:04:27 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: <4D42C236.2020203@lightlink.com> References: <4D42C236.2020203@lightlink.com> Message-ID: On 28 January 2011 14:18, Richard Loosemore wrote: > We are a long way away from AGI, unless people start to wake up to the > farcical state of affairs in artificial intelligence at the moment. > Indeed, though I would say "the farcical state of affairs in fundamental research of all kinds" (with technologies of military relevance - including military AI applications - being a limited exception in this respect, though certainly not in Europe) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Mon Jan 31 18:08:58 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 11:08:58 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: 2011/1/31 Dave Sill : > On Mon, Jan 31, 2011 at 11:57 AM, Kelly Anderson > wrote: >> >> So when IBM creates a machine with the specific programming task of >> "Pass the Turing Test" that won't be intelligence either, because it >> was programmed to pass the Turing test... right??? > > Wrong, because the Turing test is designed to test general intelligence. > And there's no "pass the Turing test". It's not like the SATs, there's not > one single Turing test that, if passed, grants an AI a certificate of > intelligence.
But an AI that accumulates a record in various Turing tests > against different interviewers equivalent to its human competitors would > demonstrate human-equivalent intelligence. This goes to the weak/strong Turing test. Fooling someone who doesn't know they are administering a Turing test is the weakest form. They just want an answer, for example, to a technical problem. There is no doubt that some programs sometimes pass this weakest form of the Turing test already. The strongest Turing test is when someone who knows a lot about natural language processing and its weaknesses can't distinguish over a long period of time the difference between a number of humans and a number of independently trained Turing computers. There are, of course, a number of intermediate forms. So when people say "pass the Turing test" it is a lot like saying "pass the SAT". What does that mean? With the SAT, it's good enough to get admitted to (School of your choice). So perhaps I suggest a new test. If a computer is smart enough to get admitted into Brigham Young University, then it has passed the Anderson Test of artificial intelligence. Is that harder or easier than the Turing test? How about smart enough to graduate with a BS from BYU? Another test... suppose that I subscribed an artificial intelligence program to this list. How long would it take for you to figure out that it wasn't human? That's a bit easier, since you don't have to do the processing in real time as with a chat program. >> Again, I just don't think anyone has a clue how to define intelligence >> or consciousness. > Intelligence is pretty straightforward. See Wikipedia. What does > consciousness have to do with it, though? I suppose that's just another emergent aspect of the human brain. There seems to be a supposition by some (not me) that to be intelligent, consciousness is a prerequisite. >> >> > The key is learning and understanding.
It doesn't matter if it's a man >> > or a >> > machine, or if the machine is using one or more clever tricks. A machine >> > that plays one game brilliantly but has no ability to learn other games >> > isn't intelligent. >> >> The right question here seems to me to be "Does Watson Learn?" >> Everything I have read seems to indicate that Watson knows answers to >> questions because Watson has processed a huge amount of free text from >> the Internet or perhaps Wikipedia or something. The point is that >> nobody sat down and programmed Watson to answer specific questions. >> This seems like "learning" by "reading" to me, and if so, that is a >> tremendous new capability (at least at this level of utility) for >> computers. > > It's learning in the sense that Google "learns" what's on the web by sucking > down a copy of it. OK, it's a little more sophisticated than that since it > has to do some parsing. That's the difference between taking a picture, and telling you what is in the picture. HUGE difference... this is not a "little" more sophisticated. > But does Watson learn from its mistakes? Does it > learn from its opponent's successes? I don't know. Does it understand > anything? I doubt it. Once again, we run into another definition issue. What does it mean to "understand"? In my mind, when I understand something, I am consciously aware that I have mastery of a fact. This presupposes consciousness. So is there some weaker form of "understanding" that is acceptable without consciousness? And if that form is such that I can use it for future computation, to say answer a question, then Watson does understand it. Yes. So by some definitions of "understand" yes, Watson understands the text it has read. >> >> If you asked Watson questions about Jeopardy, I'd bet it could answer >> a lot of them. It isn't that it "knows" anything. I don't have any >> belief that Watson is conscious or anything like that. 
> > Wait a minute...you just got done saying Watson learned all kinds of stuff > by reading it. Now you say it doesn't know any of that because it isn't > conscious? Ok, my bad. I got sloppy in my wording here. The "knows" is in quotes because when I "know" something, I am consciously aware of my knowledge of it. When a computer "knows" something, that is a lesser form of "knowing". If you say Watson knows that 'Sunflowers' was painted by 'Van Gogh', then on that level of knowing, Watson does know things. It just doesn't know that it knows it in the sense of conscious knowing. Maybe this still doesn't make total sense, this is hard stuff to define and talk intelligently about. -Kelly From jonkc at bellsouth.net Mon Jan 31 18:03:13 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 31 Jan 2011 13:03:13 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: On Jan 31, 2011, at 12:38 PM, Adrian Tymes wrote: > Voyager 2 made a swing by Uranus, and failed to see such a vessel. Don't be silly, the probability of Voyager spotting such a teapot even if it were there is virtually zero. > Furthermore, anyone seriously proposing such a teapot would > likely make an observably deliberate effort to make the proposal untestable Untestable, hmm, you mean like God planting pre-aged dinosaur bones in the ground 6000 years ago to fool us? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Jan 31 17:55:08 2011 From: jonkc at bellsouth.net (John Clark) Date: Mon, 31 Jan 2011 12:55:08 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> On Jan 31, 2011, at 12:31 PM, Adrian Tymes wrote: > Talk about one subject. Then talk about something else. A human can handle this - even if they are not an expert in all things (which no human is, though some try to pretend they are). These AIs completely break down. Until now it was true that AI programs were very brittle, but that's why I was so impressed with Watson: its knowledge base is so vast, and it's so good at finding the appropriate information from even vague, poorly phrased input, that with only a few modifications you could make a program that could speak about anything and do so intelligently enough not to be embarrassing. Of course I'm not saying it would always speak brilliantly; if it did, that would be a dead giveaway that it's not human, and it would fail the Turing Test. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Jan 31 18:40:05 2011 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 31 Jan 2011 10:40:05 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> Message-ID: 2011/1/31 John Clark : > Watson, its knowledge base is so vast and it's so good > at finding the appropriate information from even vague poorly phrased > input that with only a few modifications you could make a program that could > speak about anything and do so intelligently enough not to be embarrassing. Do you seriously believe this? Yes, it can handle on the fly research. But: 1) Can it remember things it was told, and form its own knowledge base from that, for things it can not research? 2) Can it reason based on the information, or do anything more than match keywords? 3) Can it take initiative, and supply information even when it is not asked something? (Often times, research is as much about finding the right question as about finding the answer to it. And yes, I know Jeopardy has answers that are replied to with questions. You know what I mean.) From kellycoinguy at gmail.com Mon Jan 31 18:46:35 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 11:46:35 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: On Mon, Jan 31, 2011 at 10:31 AM, Adrian Tymes wrote: > On Mon, Jan 31, 2011 at 8:57 AM, Kelly Anderson wrote: >> So when IBM creates a machine with the specific programming task of >> "Pass the Turing Test" that won't be intelligence either, because it >> was programmed to pass the Turing test... right???
> > There is reason to believe that the Turing Test can not be passed, without > the kind of generality needed for AI (or, more properly, Artificial General > Intelligence, which is what people often mean when they mention "true" > AI), and that Watson, chessmaster computers, and other specific-feat > programs have yet to display. Clearly, to pass a stronger Turing test, a computer would have to have greater capabilities than Watson. That being said, I don't know how many components would have to be added to Watson to get there. My sense is that Watson is more than half way there. An interesting aspect of passing the Turing test is for a computer to pretend NOT to know some things. If you ran into a person whose knowledge was TOO encyclopedic, you might get suspicious. Sometimes I wonder if Ken Jennings is human... :-) It might be a bit difficult to tell the difference between Ken and Watson if your chat were in the form of Jeopardy questions... > The reason? Talk about one subject. Then talk about something else. A > human can handle this - even if they are not an expert in all things (which > no human is, though some try to pretend they are). These AIs completely > break down. If they were capable of conversing on one topic using limited > terms and grammar, they can not form coherent responses on any other > topic. If you can program a convincing dialog within a domain, then you can go a long way with more memory, more programming and more processing power. There is of course more to it than that because of cross-domain issues... > Which leads to the interesting question: how, exactly, does one distinguish > the best current conversational AIs from humans? It is easy for most people > to do (if they are aware that they might be talking to an AI and have been > tasked with identifying it), but is the process easy to describe?
I would guess that you could come up with algorithms that MIGHT confuse most conversational programs, but it would be a kind of arms race between the writers of such algorithms and the programmers of the conversational program. If I were going to try and catch a would-be Turing-intelligent machine, I would probably start by telling jokes, then asking if it was funny, then asking why it was funny. I think that will be a pretty hard domain for most computer programs for some time to come. My reason for thinking so is that even people learning foreign languages have a tremendously difficult time with humor in the new language. Of course, this might result in false negatives with Indian tech support staff... > Among the things I am aware of: > 1. Lack of memory. In many cases, the AI won't remember what you said > two sentences ago, let alone display human-equivalent medium to long > term memory. ELIZA (circa 1970ish) has a very weak memory... going back one or maybe two sentences. The Ebay support girl has a memory. You start getting smart with her, and she stays huffy for a while. There's some emotional momentum with her. I got into it with her one night, and we had a raging argument. I tried asking a question at the end, and she was still mad. It was really quite fun. :-) She even claims to have a boyfriend... if you get her in the right mood. That has nothing to do with Ebay support, but she apparently has quite a back story. > 2. Inability to learn - which is a consequence of 1. You can not teach one > of these AIs even a simple game, in the manner you would conversationally > teach an 8 year old. I believe Watson, by nearly any definition, has the capacity to learn. It can only learn some kinds of things, but there is clearly learning going on there. At least I would call that learning. As I said in another email, it would not take more work than programming Watson to program a computer to learn arbitrary board games.
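[Ed.: Adrian's first two criteria above, remembering what it was told and using it later, are easy to prototype even if doing them well is very hard. A minimal sketch in Python, purely illustrative; the retrieval is bare keyword overlap and resembles nothing in Watson's or ELIZA's actual internals.]

```python
# A toy conversational memory: the bot stores every statement it is told
# and later answers questions by returning the remembered statement that
# shares the most keywords with the question. Illustrative only.

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "what"}

def keywords(text):
    return {w for w in text.lower().strip("?.").split() if w not in STOPWORDS}

class ChatMemory:
    def __init__(self):
        self.facts = []  # every statement the bot has been told, verbatim

    def tell(self, statement):
        self.facts.append(statement)

    def ask(self, question):
        # Pick the remembered fact with the largest keyword overlap.
        q = keywords(question)
        best = max(self.facts, key=lambda f: len(q & keywords(f)), default=None)
        if best and q & keywords(best):
            return best
        return "I don't know."

bot = ChatMemory()
bot.tell("Watson was built by IBM to play Jeopardy")
bot.tell("ELIZA was written around 1966")
print(bot.ask("Who built Watson?"))  # prints: Watson was built by IBM to play Jeopardy
```

Even this trivially "remembers what you said two sentences ago", which is the easy half; distinguishing it from understanding is exactly the point under debate here.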
That is well within the capacity of this generation of AIs, IMHO, even if it hasn't yet been programmed. > 3. Lack of initiative. Most of these AIs are reactive only. When deprived of > outside stimuli, such as a human talking to it, they just sit there and do > nothing, as if unaware of the passage of time. (In a human, this would be > called "vegetative state", and is one of the criteria used to legally designate > a given human body as something to be no longer treated as a full human > being unless and until it recovers from that condition - which, in most cases, > is seen as effectively impossible due to the causes of that condition.) See now we're going beyond "what is intelligence" to "what is human"... so the target moves again... :-) Time slicing computers are much better at using their spare cycles than any human being. Programming an AI to have initiative would be one of the easiest things to do. You would simply have to give it a set of goals. People don't have initiative, they just like eating, breathing, drinking (sometimes to excess), learning, etc. Set up a computer with goals (solve the problem of world hunger) and you would probably see more initiative than you would get from a Washington page. If peace is ever achieved in the Middle East, it will probably be negotiated by an AI. Being self directed is similarly easy. You just give the computer a lot of goals to choose from, then instruct it to pick the best goal, and work towards it for a while, then if it is not satisfied that it is getting good results, go back and pick another goal. Initiative is one of the easiest things to program. Those Google spiders have plenty of initiative... :-) -Kelly From hkeithhenson at gmail.com Mon Jan 31 18:53:15 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 31 Jan 2011 11:53:15 -0700 Subject: [ExI] atheists declare religions as scams.
Message-ID: On Mon, Jan 31, 2011 at 5:00 AM, Eugen Leitl wrote: > > On Sun, Jan 30, 2011 at 05:31:17PM -0700, Keith Henson wrote: >> snip >> >> I think atheists would be much better off to try to understand why (in >> an evolutionary sense) humans have religions at all. > > http://postbiota.org/pipermail/tt/2010-December/008311.html > > referencing http://www.springerlink.com/content/m0v73485k8t58571/ It's interesting, and I don't doubt their data, but this isn't the origin of religions. Even if there is a correlation between reproduction and religiosity today, correlation is not causation, and in this case, it can't be. The divergence between the reproduction of the religious and the not religious can't be more than single-digit generations. Three to four generations back virtually everyone who got married at all had a bunch of kids. The rules for invoking EP require that there be both selection pressure and time for evolved psychological traits to emerge. The stronger the selection pressure, the faster the traits get selected. But even in the case of the tamed foxes, it took 8 generations to get even partly tame foxes. Gregory Clark makes a case that it took 24 generations of very strong selection for the psychological traits needed for the industrial revolution to become common. So the current correlation between reproduction and religiosity has to depend on something deeper and much older. I make a case that the human capacity for religion came from the evolved mechanisms that turn wars on and off as environmental conditions make wars more or less desirable from the viewpoint of genes. But you knew that. Keith >> I make one case, perhaps there are better explanations.
>> >> Keith >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > > > ------------------------------ > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > End of extropy-chat Digest, Vol 88, Issue 51 > ******************************************** > From sparge at gmail.com Mon Jan 31 18:47:11 2011 From: sparge at gmail.com (Dave Sill) Date: Mon, 31 Jan 2011 13:47:11 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: On Mon, Jan 31, 2011 at 12:52 PM, Kelly Anderson wrote: > > The question is how many very small slices do you need to build up a > Strong AI? The answer seems to be "all of them" that humans have, and while that is an interesting answer that leads directly to machine > learning, is it a useful answer? In other words, is what IBM is doing > with Watson useful? Damn right it is. > Of course it is. And it's far more useful than a chess computer. > Wikipedia has a pretty good one: > > "Intelligence is an umbrella term describing a property of the mind > > including related abilities, such as the capacities for abstract thought, > > understanding, communication, reasoning, learning, learning from past > > experiences, planning, and problem solving."
> > By this definition, a computer will never have intelligence because > someone will say, But the computer doesn't have a "mind". It's all a > bit circular. No, that definition says intelligence is a property of the mind (because that's the only place we've observed it so far), but whether an AI has a mind or not is a different question. > I have seen individual computer programs that exhibit > all of the characteristics (one at a time) in that list, but I > wouldn't consider any of them intelligent, except over a very limited > domain. > Exactly, and that's the key. We've seen single-purpose systems that act intelligently, but they're not general and they're not intelligent by, e.g., the Wikipedia definition. > > I disagree. Show me a computer that meets the above definition of > > intelligence at an average human level. > > There isn't one. But in 2060 when there is a computer that meets and > exceeds the above definition on every measurable level and by every > conceivable test, there will still be people (maybe not you, but some > people) who will say, but it's all just an elaborate parlor trick. The > computer isn't REALLY intelligent. > So what? In my experience, anything that escapes AI gets a new name. Pattern > recognition, computer vision, natural language processing, optical > character recognition, facial recognition, etc. etc. Right, because those are all very specific skills. > So that for all > practical purposes AI is forever the stuff we don't know how to do > very well yet. > Until we get to the point that we can assemble a system that is intellectually equivalent to a human. The first computer that passes the Turing test (and I'm sure there are > weaker and stronger forms of the Turing test) will no doubt have a > technology with a name, and that name will probably not be "artificial > intelligence"... > No, it'll be a brand name.
:-) Do the computer programs that generate new compositions in the style > of (insert your favorite classical composer here) have artificial > intelligence in that area? Or is it just another technology that has > escaped AI and gotten a new name? I don't think they have real intelligence, artificial or otherwise. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Mon Jan 31 18:28:32 2011 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 31 Jan 2011 10:28:32 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: 2011/1/31 John Clark : > On Jan 31, 2011, at 12:38 PM, Adrian Tymes wrote: > > Voyager 2 made a swing by Uranus, and failed to see such a vessel. > > Don't be silly, the probability of Voyager spotting such a teapot even if it > were there is virtually zero. I'm not so sure. It depends on how close Voyager passed, and they do pick up a lot of details with repeated analysis. But the telescope observations are stronger evidence than Voyager in any case. > > Furthermore, anyone seriously proposing such a teapot would > > likely make an observably deliberate effort to make the proposal untestable > > Untestable, hmm, you mean like God planting pre-aged dinosaur bones in the > ground 6000 years ago to fool us? Yes, that kind of thing. (And, even if that were the case, that would be first party evidence of intent that God does not want us to believe, strong enough to override third party evidence of people saying we should believe anyway. Notice how quickly this contention is backed away from when its proponents are shown that they are asking us to defy their own God's will.)
From spike66 at att.net Mon Jan 31 18:45:18 2011 From: spike66 at att.net (spike) Date: Mon, 31 Jan 2011 10:45:18 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: <005601cbc177$0256e7f0$0704b7d0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark . Untestable, hmm, you mean like God planting pre-aged dinosaur bones in the ground 6000 years ago to fool us?...John K Clark No that isn't the leading young-earther creationist theory John. The theory is that humans carved the dinosaur skeletons out of stone and assembled them as idols. Soon god saw this was a bad idea: some of them might somehow be preserved, and way down in history, humans would find them, and think they were the skeletons of ancient extinct beasts. This would lead them to invent evolutionary theory and become infidels. So, he made a rule against them, recorded in Exodus 20: 4-6. So important was this rule against carving dinosaurs and other extinct life forms, that the rule against them made the big ten; in fact it was the first of the ten commandments (or first and second depending on how you count them) right up there in importance with thou shalt not kill and thou shalt not steal. So it wasn't god who put those bones in the ground, it was humans who made them, then they were buried and survived several millennia. The old timers carving those dinosaur skeletons has caused most of our world to go all infidel. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Mon Jan 31 19:05:08 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 31 Jan 2011 14:05:08 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> Message-ID: <4D4707E4.3000106@lightlink.com> Kelly Anderson wrote: > On Fri, Jan 28, 2011 at 9:01 AM, Richard Loosemore wrote: >> John Clark wrote: >>> http://www.youtube.com/watch?v=WFR3lOm_xhE >> Yes, but do you have any idea how trivial this is? > > Trivial!?! This is the final result of decades of research in both > software and hardware. Hundreds of thousands of man hours have gone > into the projects that directly led to this development. Trivial! You > have to be kidding. The subtle language cues that are used on Jeopardy > are not easy to pick up on. This is a really major advance in AI. I > personally consider this to be a far more impressive achievement than > Deep Blue learning to play chess. I stand by my statement that what Watson can do is "trivial". You are wildly overestimating Watson's ability to handle "subtle language cues". It is being asked a direct factual question (so, no need for Watson to categorize the speech into the dozens or hundreds of subtle locution categories that a human would have to), and there is also no need for Watson to try to gauge the speaker's intent on any of the other levels at which communication usually happens. Furthermore, Watson is unable (as far as I know) to deploy its knowledge in such a way as to learn any new concepts just by talking, or answer questions that involve mental modeling of situations, or abstractions. For example, I would bet that if I ask Watson: "If I have a set of N balls in a bag, and I pull out the same number of balls from the bag as there are letters in your name, how many balls would be left in the bag?" It would be completely unable to answer. > Richard, do you think computers will achieve Strong AI eventually? 
Kelly, by my reckoning I am one of only a handful of people on this planet with the ability to build a strong AI, and I am actively working on the problem (in between teaching, fundraising, and writing to the listosphere). Richard Loosemore From darren.greer3 at gmail.com Mon Jan 31 19:04:18 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 31 Jan 2011 15:04:18 -0400 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: >There is not one scrap of evidence that a teapot is in orbit around the planet Uranus and there is not one scrap of evidence that there is not such a vessel; nevertheless I do not treat both possibilities with equal respect so I am not a Uranus teapot agnostic.< A teapot agnostic says "I don't have enough data to determine whether the teapot exists so I can't form an opinion." And by the way," he adds,"neither do you, so don't come around here with your teapot notions and waste my time." See my point? Respect has nothing to do with it. You allot each position as a possibility, but that's not the same thing as respecting those who hold that position. It would be quite natural to be agnostic and think that both belief or non-belief in the teapot and the time spent considering it was irrational and a monumental waste of time, energy and human brainpower precisely because there wasn't enough data to support the argument either way. Darren 2011/1/31 John Clark > On Jan 30, 2011, at 1:59 PM, Darren Greer wrote: > > I don't see why an agnostic by definition should accord respect to a > believer or non-believer. 
> > > There is not one scrap of evidence that a teapot is in orbit around the > planet Uranus and there is not one scrap of evidence that there is not such > a vessel; nevertheless I do not treat both possibilities with equal respect > so I am not a Uranus teapot agnostic. I am a teapot atheist. > > John K Clark > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* *--A League of Their Own* -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Mon Jan 31 19:13:47 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 31 Jan 2011 14:13:47 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits In-Reply-To: References: <4D42C236.2020203@lightlink.com> Message-ID: <4D4709EB.1040005@lightlink.com> Stefano Vaj wrote: > On 28 January 2011 14:18, Richard Loosemore > wrote: > > We are a long way away from AGI, unless people start to wake up to > the farcical state of affairs in artificial intelligence at the moment. > > > Indeed, even though I would say "the farcical state of affairs in > fundamental research of all kinds". I agree. There is a serious malaise in the world of science. Richard Loosemore From atymes at gmail.com Mon Jan 31 19:11:07 2011 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 31 Jan 2011 11:11:07 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: <005601cbc177$0256e7f0$0704b7d0$@att.net> References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> <005601cbc177$0256e7f0$0704b7d0$@att.net> Message-ID: 2011/1/31 spike : > No that isn't the leading young-earther creationist theory John.
The theory > is that humans carved the dinosaur skeletons out of stone and assembled them > as idols.? Soon god saw this was a bad idea: some of them might somehow be > preserved, and way down in history, humans would find them, and think they > were the skeletons of ancient extinct beasts. How do - or do, at all - the creationists deal with the fact that the skeletons aren't, y'know, stone? Stone and fossilized bone aren't hard to tell apart, and it's not like people back then knew how to make a single large bone from many smaller ones when working with actual bone. (Even today, it'd be a trick, though probably doable by certain labs.) From kellycoinguy at gmail.com Mon Jan 31 19:23:32 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 31 Jan 2011 12:23:32 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> Message-ID: On Mon, Jan 31, 2011 at 11:40 AM, Adrian Tymes wrote: > 2011/1/31 John Clark : >> Watson, its knowledge base is so vast and it's so good >> at finding the appropriate information from even vague poorly phrased >> input?that with only a few modifications you could make a program that could >> speak about anything and do so intelligently enough not to be embarrassing. > > Do you seriously believe this? If it is true, I believe IBM will do it. I can't believe that they would NOT work on this next. > Yes, it can handle on the fly research. ?But: > > 1) Can it remember things it was told, and form its own knowledge base from > that, for things it can not research? It would be trivial for Watson to add new text to its system. So if you "told" it something, it would not be difficult at all for it to add that text to its database. 
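[Ed.: Kelly's ingest-then-answer loop — append told text to the database, score every candidate, and buzz only when confidence is high enough — can be caricatured in a few lines of Python. Everything below is a hypothetical toy for illustration; it bears no resemblance to IBM's actual pipeline.]

```python
# Toy ingest-and-answer loop: new text can be added to the store at any
# time, every stored passage gets a confidence score for each question,
# and the system "buzzes" only when the best score clears a threshold.

class ToyQA:
    def __init__(self, threshold=0.5):
        self.passages = []       # the "database" of told/ingested text
        self.threshold = threshold

    def ingest(self, text):
        # "Telling" the system something is just appending to its store.
        self.passages.append(text)

    def confidence(self, question, passage):
        # Fraction of question words found in the passage (crude overlap).
        q = question.lower().replace("?", "").split()
        p = set(passage.lower().split())
        return sum(w in p for w in q) / len(q)

    def answer(self, question):
        scored = [(self.confidence(question, p), p) for p in self.passages]
        best_score, best = max(scored, default=(0.0, None))
        if best_score >= self.threshold:
            return best          # confident enough to buzz in
        return None              # stay silent rather than guess

qa = ToyQA(threshold=0.5)
qa.ingest("Crown Burger is a restaurant in Orem serving hamburgers")
qa.ingest("Watson played Jeopardy against Ken Jennings")
print(qa.answer("who played jeopardy against ken jennings"))
# prints: Watson played Jeopardy against Ken Jennings
```

The threshold is the interesting knob: it is the toy analogue of the confidence check Kelly describes before Watson pushes the buzzer.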
This seems very straightforward and (dare I use the word) almost trivial. One key aspect of Watson is that while it is playing, it can not access the Internet. > 2) Can it reason based on the information, or do anything more than match > keywords? It is doing FAR MORE than matching keywords. Finding key words is clearly part of what it's doing, but it also clearly has some form of inference engine. In one IBM video I watched, it was explained that for every alternative way a vague sentence could be parsed, Watson parsed it in ALL of those ways, creating a search tree for answers along all of the paths of innuendo, "pun"ishment, and other bizarre things that Jeopardy does with the language. Many of these search trees contained hundreds if not thousands of alternative explanations as to what was being asked. It then evaluates the answers on each of those paths, and if it has high enough confidence in its answer, it pushes the buzzer. All of this is accomplished in three seconds by thousands of processors. > 3) Can it take initiative, and supply information even when it is not asked > something? (Often times, research is as much about finding the right question > as about finding the answer to it. And yes, I know Jeopardy has answers that > are replied to with questions. You know what I mean.) It is apparently programmed to chat with the host. And indeed, all of the searching above is about finding the right question. Adding initiative to Watson would not be difficult, it just needs goals. I can imagine programming with Watson using natural language in a kind of Prolog style... that would be fun. Kelly:"Widely recognized as the best restaurant in Orem, UT." Watson:"What is the Thai Chilli Garden" Kelly:"...Serving hamburgers" Watson:"What is Crown Burger" Kelly:"Address of..."
Watson:"What is 448 N 500W" -Kelly From sparge at gmail.com Mon Jan 31 19:30:10 2011 From: sparge at gmail.com (Dave Sill) Date: Mon, 31 Jan 2011 14:30:10 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: On Mon, Jan 31, 2011 at 1:08 PM, Kelly Anderson wrote: > > > The strongest Turing test is when someone who knows a lot about > natural language processing and its weaknesses can't distinguish over > a long period of time the difference between a number of humans, and a > number of independently trained Turing computers. > No, language processing is only one aspect of intelligence. The strongest Turing test would also measure the ability to learn, to learn from past experiences, to plan, to solve problems...all of the things the Wikipedia definition mentions, and maybe more. So perhaps I suggest a new test. If a computer is smart enough to get > admitted into Brigham Young University, then it has passed the > Anderson Test of artificial intelligence. You mean achieve an SAT score sufficient to get into BYU? Or do you mean that it has to go through school or take a GED, fill out an application to BYU, etc. like a human would have to do? > Is that harder or easier than the Turing test? Depends on the Turing test, I'd say. How about smart enough to graduate with a BS from BYU? > How about it? It'd be an impressive achievement. > Another test... suppose that I subscribed an artificial intelligence > program to this list. How long would it take for you to figure out > that it wasn't human? That's a bit easier, since you don't have to do > the processing in real time as with a chat program.
Depends how active it is, what it writes, and whether anyone is clued in to the fact that there's a bot on the list. A Watson-like bot that answers questions occasionally could be pretty convincing. But it'd fall apart if anyone tried to engage it in a discussion. I suppose that's just another emergent aspect of the human brain. > There seems to be a supposition by some (not me) that to be > intelligent, consciousness is a prerequisite. > OK, then let's leave it out for now because I don't think it's necessary, either. That's the difference between taking a picture, and telling you what > is in the picture. HUGE difference... this is not a "little" more > sophisticated. > No, parsing a sentence into parts of speech is not hugely sophisticated. > Once again, we run into another definition issue. What does it mean to > "understand"? http://en.wikipedia.org/wiki/Understanding In my mind, when I understand something, I am > consciously aware that I have mastery of a fact. This presupposes > consciousness. So is there some weaker form of "understanding" that is > acceptable without consciousness? It's not necessarily weaker to leave consciousness out of it. > And if that form is such that I can > use it for future computation, to say answer a question, then Watson > does understand it. Yes. So by some definitions of "understand" yes, > Watson understands the text it has read. > Granted, at a trivial level Watson could be said to understand the data it's incorporated. But it doesn't have human-level understanding of it. Ok, my bad. I got sloppy in my wording here. The "knows" is in quotes > because when I "know" something, I am consciously aware of my > knowledge of it. When a computer "knows" something, that is a lesser > form of "knowing". If you say Watson knows that 'Sunflowers' was > painted by 'Van Gogh', then on that level of knowing, Watson does know > things. It just doesn't know that it knows it in the sense of > conscious knowing.
Maybe this still doesn't make total sense, this is > hard stuff to define and talk intelligently about. Just leave consciousness out of it. It's irrelevant. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jan 31 19:46:33 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 31 Jan 2011 13:46:33 -0600 Subject: [ExI] AGI (and other) IQ test In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> Message-ID: <4D471199.9070402@satx.rr.com> See which links to José Hernández-Orallo (a) and David L. Dowe (b). (a) Departament de Sistemes Informàtics i Computació, Universitat Politècnica de València, Camí de Vera s/n, E-46022, València, Spain. (b) Computer Science & Software Engineering, Clayton School of I.T., Monash University, Clayton, Victoria, 3800, Australia. Received 16 December 2009; revised 24 September 2010; accepted 24 September 2010. Available online 29 September 2010. Abstract In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms "universal" and "anytime" is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be.
In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early "compression tests" and the more recent definition of "universal intelligence" in order to design new "universal intelligence tests", where a feasible implementation has been a design requirement. One of these tests is the "anytime intelligence test", which adapts to the examinee's level of intelligence in order to obtain an intelligence score within a limited time. From rpwl at lightlink.com Mon Jan 31 20:09:59 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 31 Jan 2011 15:09:59 -0500 Subject: [ExI] AGI (and other) IQ test In-Reply-To: <4D471199.9070402@satx.rr.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> <4D471199.9070402@satx.rr.com> Message-ID: <4D471717.9070008@lightlink.com> Damien Broderick wrote: > See Allow me to paraphrase the research (based on the abstract): "We don't know how to define 'intelligence' so we will substitute the definition of something else - a parameter extracted from the behavior of a simple function, when that function is applied to an infinite set of parallel universes - and then stick the label 'intelligence' on this parameter and hope you don't notice that it has nothing to do with the meaning 
of the same word, when used by ordinary people. Next we build a test for this thing that we call 'intelligence', but since a proper test would require access to infinite sets of parallel universes, we will just truncate it in what we think is a good enough way, and call it done" If you like drinking hogwash you'll enjoy this vintage. Richard Loosemore > which links to > > José Hernández-Orallo and David > L. Dowe > > a Departament de Sistemes Informàtics i Computació, Universitat > Politècnica de València, Camí de Vera s/n, E-46022, València, Spain > > b Computer Science & Software Engineering, Clayton School of I.T., > Monash University, Clayton, Victoria, 3800, Australia > Received 16 December 2009; > revised 24 September 2010; > accepted 24 September 2010. > Available online 29 September 2010. > > Abstract > > In this paper, we develop the idea of a universal anytime intelligence > test. The meaning of the terms "universal" and "anytime" is > manifold here: the test should be able to measure the intelligence of > any biological or artificial system that exists at this time or in the > future. It should also be able to evaluate both inept and brilliant > systems (any intelligence level) as well as very slow to very fast > systems (any time scale). Also, the test may be interrupted at any time, > producing an approximation to the intelligence score, in such a way that > the more time is left for the test, the better the assessment will be. > In order to do this, our test proposal is based on previous works on the > measurement of machine intelligence based on Kolmogorov complexity and > universal distributions, which were developed in the late 1990s (C-tests > and compression-enhanced Turing tests). It is also based on the more > recent idea of measuring intelligence through dynamic/interactive tests > held against a universal distribution of environments. 
We discuss some > of these tests and highlight their limitations since we want to > construct a test that is both general and practical. Consequently, we > introduce many new ideas that develop early "compression tests" and > the more recent definition of "universal intelligence" in order to > design new "universal intelligence tests", where a feasible > implementation has been a design requirement. One of these tests is the > "anytime intelligence test", which adapts to the examinee's level of > intelligence in order to obtain an intelligence score within a limited > time. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Mon Jan 31 19:57:31 2011 From: spike66 at att.net (spike) Date: Mon, 31 Jan 2011 11:57:31 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D4707E4.3000106@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> Message-ID: <008601cbc181$18eb0af0$4ac120d0$@att.net> >... On Behalf Of Richard Loosemore ... >For example, I would bet that if I ask Watson: >"If I have a set of N balls in a bag, and I pull out the same number of balls from the bag as there are letters in your name, how many balls would be left in the bag?" It would be completely unable to answer. Richard Loosemore Well sure, but Richard, there is an appalling fraction of humanity which would fail that test. My question is more pragmatic. Let us not worry for now about having created intelligence, but rather the more practical and pragmatic question: How far are we from creating software that will serve as an adequate companion for the partially impaired elderly person? 
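As a yardstick for how low that bar can sit, an Eliza-class companion is little more than a short list of pattern/response rules. Here is a minimal sketch in Python; every pattern and canned reply below is invented for illustration, and none of it is taken from Weizenbaum's actual Eliza script:

```python
import re

# Ordered (pattern, response-template) rules; first match wins.
# These rules are illustrative stand-ins, not the historical Eliza script.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi remember (.+)", re.IGNORECASE), "What else do you remember about {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    """Return a canned reply by first-match pattern lookup; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

respond("I feel lonely today") comes back as "Why do you feel lonely today?" purely by template substitution; no model of loneliness, or of anything else, is involved. That is the snake-oil con trick in about twenty lines.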
Part of the reason the elderly decline mentally so quickly is that in many if not most cases they suffer terribly from boredom and loneliness. We busy younger people do what we can, but most of their companionship is from others in a similar situation. Often they hold back from building strong friendships because they know they may have to bury that friend soon. I can imagine that by about the third or fourth eulogy they deliver, they give up, say to hell with this, I will watch Jeopardy on TV, Alex Trebek will likely outlive me. Having parents rapidly approaching old age focuses my own mind on this problem. We need machines which can tirelessly carry on stimulating conversation. With the Watson experiment, I feel we are getting tantalizingly close to that now. spike From spike66 at att.net Mon Jan 31 20:14:02 2011 From: spike66 at att.net (spike) Date: Mon, 31 Jan 2011 12:14:02 -0800 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> <005601cbc177$0256e7f0$0704b7d0$@att.net> Message-ID: <008c01cbc183$6787f130$3697d390$@att.net> >... On Behalf Of Adrian Tymes ... Subject: Re: [ExI] Fw: Re: atheists declare religions as scams. 2011/1/31 spike : >> No, that isn't the leading young-earther creationist theory, John. The >> theory is that humans carved the dinosaur skeletons out of stone and >> assembled them as idols. Soon god saw this was a bad idea: some of >> them might somehow be preserved, and way down in history, humans would >> find them, and think they were the skeletons of ancient extinct beasts. >...How do - or do, at all - the creationists deal with the fact that the skeletons aren't, y'know, stone? Stone and fossilized bone aren't hard to tell apart, and it's not like people back then knew how to make a single large bone from many smaller ones when working with actual bone. 
(Even today, it'd be a trick, though probably doable by certain labs.) spike You ask too many questions, young man. It's turtles all the way down. Creationist theory doesn't work for chemistry hipsters, physicists, observant beast watchers, that sort. If one is all the above, well too bad for that one, the flaming infidel. Interesting experience: many years ago I was in the Petrified Forest museum in Arizona with my wife's parents, both hard-core creationists. There was one dinosaur skeleton in there, a hadrosaur, not so huge, smaller than an elephant. My father-in-law was explaining to me the notion of the ancient idol makers carving these things out of stone. I looked carefully at that skeleton, as closely as the rail would allow. The heel bones of that fossil still had the attachment scars where the Achilles tendon attaches to the bone. That marvelous fossil was preserved well enough to still see that! I commented how marvelous it was that the artisans who crafted this idol paid close enough attention to detail that they would carve an insertion scar on a bone. Upon my pointing to this, he fled. My mother-in-law would not venture anywhere near that dinosaur skeleton, refused to go even within 20 meters of the beast. Wouldn't even look at it! I spent nearly an hour studying that one exhibit. The others had their lunch while waiting for me to tear myself away from that skeleton. You would be amazed at how much you can learn if you really study the hell out of a dinosaur skeleton. Those are very talkative fossils. Many of you fly in and out of Chicago O'Hare. Next time you are there, stop in Terminal 1 just past the proctological exam, and really study that Brachiosaurus. Don't just gawk like a prairie chicken, study it! STUDY! Then come back and report what you learned, please. 
spike From rpwl at lightlink.com Mon Jan 31 20:42:12 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 31 Jan 2011 15:42:12 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <008601cbc181$18eb0af0$4ac120d0$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> Message-ID: <4D471EA4.7080900@lightlink.com> spike wrote: >> ... On Behalf Of Richard Loosemore > ... >> For example, I would bet that if I ask Watson: > >> "If I have a set of N balls in a bag, and I pull out the same number of > balls from the bag as there are letters in your name, how many balls would > be left in the bag?" It would be completely unable to answer. Richard > Loosemore > > > Well sure, but Richard, there is an appalling fraction of humanity which > would fail that test. > > My question is more pragmatic. Let us not worry for now about having > created intelligence, but rather the more practical and pragmatic question: > > How far are we from creating software that will serve as an adequate > companion for the partially impaired elderly person? > > We need machines which can tirelessly carry on stimulating > conversation. With the Watson experiment, I feel we are getting > tantalizingly close to that now. But that is *exactly* my point. We are not getting tantalizingly close, we are just doing the same old snake-oil con trick of building a system that works in a ridiculously narrow domain, and which impresses some people with the sheer breadth of information it stores inside it. Watson does not contain the germ of an intelligence, it contains a dead-end algorithm designed to impress the gullible. That strategy has been the definition of "artificial intelligence" for the last thirty or forty years, at least. A real AI is not Watson + extra machinery to close the gap to a full conversational machine. 
Instead, a real AI involves throwing away Watson, starting from scratch, and doing the whole thing in a completely different way .... a way that actually allows the system to build its own knowledge, and use that knowledge in an ever-expanding range of ways. Richard Loosemore From eugen at leitl.org Mon Jan 31 21:33:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 31 Jan 2011 22:33:08 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: <20110131213308.GK23560@leitl.org> On Mon, Jan 31, 2011 at 01:03:13PM -0500, John Clark wrote: > Untestable, hmm, you mean like God planting pre-aged dinosaur bones in the ground 6000 years ago to fool us? Ahem. http://www.google.com/images?source=imghp&q=dinosaur+jesus From spike66 at att.net Mon Jan 31 21:21:31 2011 From: spike66 at att.net (spike) Date: Mon, 31 Jan 2011 13:21:31 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D471EA4.7080900@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: <00ab01cbc18c$d4e17a40$7ea46ec0$@att.net> ... On Behalf Of Richard Loosemore Subject: Re: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. spike wrote: >> ... On Behalf Of Richard Loosemore > ... >> ... My question is more pragmatic. Let us not worry for now about having > created intelligence, but rather the more practical and pragmatic question: > >> How far are we from creating software that will serve as an adequate > companion for the partially impaired elderly person? > >> We need machines which can tirelessly carry on stimulating > conversation. 
With the Watson experiment, I feel we are getting > tantalizingly close to that now... spike >...But that is *exactly* my point. We are not getting tantalizingly close, we are just doing the same old snake-oil con trick of building a system that works in a ridiculously narrow domain, and which impresses some people with the sheer breadth of information it stores inside it... Ja, and that is exactly my point as well. It impresses *some* people, perhaps the people we need to impress, in a ridiculously narrow domain. We have seen how gullible teenagers can carry a conversation with Eliza. There are plenty of people who were well past game playing by the time Eliza showed up in the 70s. These are now elderly and in many cases lonely. I would think we could harness Eliza or Watson and come up with a sorta humanish companion. It wouldn't be great, but there would be real-time interaction that would be far better than nothing, and far better than daytime TV. A snake-oil con trick might work just fine for this application. For the better situated (financially) elderly, we can imagine coupling this conversation ability with an attractive young sexbot. Hmm, on second thought, that might not be such a good idea. I would be tempted to wait until dad takes his nap, turn off her volume and take advantage of her. Ok scratch the sexbot notion. But we could imagine building a conversational robot without simulated sex organs, even if the conversation is contrived and Eliza-ish. >...Watson does not contain the germ of an intelligence, it contains a dead-end algorithm designed to impress the gullible... Richard Loosemore Ja but keep in mind this isn't for you Richard, it is for your grandmother. She needs companionship after she starts developing dementia. Often humans won't bother conversing with an Alzheimer's patient: it is too goddam frustrating. They make the same comment over and over and over, sometimes a dozen times in an hour, until the family flees. 
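The one thing such a machine gets right for free is patience: a lookup table answers the hundredth repetition of a question exactly as willingly as the first. A toy sketch of that point (the questions and answers here are made up for illustration, not from any real product):

```python
# A tireless question-answerer: normalize the question, look it up, answer again.
# Unlike an exhausted family member, the twelfth repetition costs it nothing.
# The Q/A pairs below are invented for illustration.
FAQ = {
    "what day is it": "It's Tuesday, and your daughter visits on Thursday.",
    "where are my glasses": "You usually leave them on the kitchen table.",
}

def normalize(question: str) -> str:
    # Lowercase and drop punctuation so repeated phrasings hit the same key.
    return "".join(ch for ch in question.lower() if ch.isalnum() or ch == " ").strip()

def answer(question: str, counts: dict) -> str:
    key = normalize(question)
    counts[key] = counts.get(key, 0) + 1  # track repetition, but never complain
    return FAQ.get(key, "I'm not sure, but let's figure it out together.")
```

Asking "What day is it?" a dozen times in an hour yields the identical calm answer a dozen times; the repetition is counted but never resented.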
Then the patient doesn't even know what he or she said, and doesn't understand why no one wants to talk. A most cruel disease is Alzheimer's. Machines would patiently keep repeating the same answers. I see it as one hell of a breakthrough, even if we know it isn't artificial intelligence. I don't care if it isn't AI, all I want is something to keep my parents company 10 yrs from now. spike From stefano.vaj at gmail.com Mon Jan 31 23:39:28 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 1 Feb 2011 00:39:28 +0100 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: 2011/1/31 John Clark : > There is not one scrap of evidence that a teapot is in orbit around the > planet Uranus and there is not one scrap of evidence that there is not such > a vessel; nevertheless I do not treat both possibilities with equal respect > so I am not a Uranus teapot agnostic. I am a teapot atheist. Why, a teapot or Silver Surfer orbiting Uranus could in principle exist. A Supreme Being of the Judeo-Christian persuasion poses some additional (and I would say: some more radical) problems. Same as a unicorn who is both pink *and* intrinsically invisible. So, I am a teapot and Marvel-universe agnostic, and a Jahvè/Allah/God atheist... ;-) -- Stefano Vaj